What to Look at When Decoding Object Category Information from Electrical Brain Activations
Published in: 6th Iranian International Human Brain Mapping Congress
Year of publication: 1398 SH (2019)
Document type: conference paper
Language: English
The full text of this paper has not been provided and is not available.
National scientific document ID:
HBMCMED06_030
Indexing date: 6 Aban 1398 (28 October 2019)
Abstract:
How does the human brain recognize visual objects so rapidly and accurately? This is remarkable because changes in the object or its environment (e.g., when the object is rotated, viewed at a distance, or illuminated from a different angle; Fig. 1) make it highly unlikely that an individual object ever projects two identical images onto the retina [1]. Previous studies have searched for the neural mechanisms that support such 'invariant object representations' using brain imaging [2]. However, it is still unknown which dimensions of recorded electrical brain activity (EEG) contain object category information.

Method: In this research, we extracted and compared the efficacy of the largest set of computational features in the literature (n=32), each of which has been suggested to carry object category information in EEG. These features ranged from simple signal statistics, such as the mean, variance, and evoked signal potentials (e.g., P1; Fig. 2), to state-of-the-art features such as phase-amplitude coupling and features derived from Convolutional Neural Networks (CNNs). We also introduced two new features, signal autocorrelation and inter-channel correlation. All features were extracted from two object EEG datasets (31-channel amplifier, band-pass filtered from 0.1 to 200 Hz, notch-filtered at 50 Hz, with eye artifacts removed using ICA). Finally, we used a machine learning classifier (linear discriminant analysis; pairwise classification with 10-fold cross-validation) and multivariate pattern analysis to decode object categories from each feature separately.

Results: Single-valued features provided little information about object categories. However, the autocorrelation and inter-channel correlation features, despite containing only 30 values, were the most informative of object categories and outperformed all single-valued and 1000-valued features (Fig. 1).

Conclusions: By demonstrating the significance of temporal correlations in the signal, these results provide important insights into the neural encoding of object categories and have great implications for brain-computer interface applications.
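The decoding pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the exact feature definitions and the EEG datasets are not given in the abstract, so the feature computations (a 30-lag channel-averaged autocorrelation and the upper triangle of the inter-channel Pearson correlation matrix), the synthetic two-class epochs, and the use of scikit-learn's `LinearDiscriminantAnalysis` with 10-fold cross-validation are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

def autocorrelation_feature(epoch, n_lags=30):
    # Assumed feature: autocorrelation at lags 1..n_lags, averaged over channels.
    # epoch: array of shape (n_channels, n_samples)
    feats = []
    for ch in epoch:
        ch = ch - ch.mean()
        ac = np.correlate(ch, ch, mode="full")[len(ch) - 1:]  # lags 0..n-1
        ac = ac / ac[0]                                        # normalize by zero-lag
        feats.append(ac[1:n_lags + 1])
    return np.mean(feats, axis=0)

def interchannel_correlation_feature(epoch):
    # Assumed feature: upper triangle of the channel-by-channel correlation matrix.
    c = np.corrcoef(epoch)
    return c[np.triu_indices_from(c, k=1)]

# Synthetic two-class data: 40 epochs per class, 5 channels, 200 samples.
# Class 1 adds a component shared across channels, raising inter-channel correlation.
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        epoch = rng.standard_normal((5, 200))
        if label == 1:
            epoch += 0.5 * rng.standard_normal((1, 200))
        X.append(np.concatenate([autocorrelation_feature(epoch),
                                 interchannel_correlation_feature(epoch)]))
        y.append(label)
X, y = np.array(X), np.array(y)

# Pairwise (two-class) LDA classification with 10-fold cross-validation.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0))
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

With real data, each pair of object categories would be decoded this way for every feature separately, and the cross-validated accuracies compared across features.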
Authors
Mozhgan Shahmohammadi
Faculty of Engineering, Department of Computer Engineering, Islamic Azad University, Central Tehran Branch
Saeed Setayeshi
Faculty of Engineering, Department of Computer Engineering, Islamic Azad University, Central Tehran Branch; Medical Radiation Engineering Department, Amirkabir University of Technology