Conference Poster, 2018

EEG-based Visual Attentional State Decoding Using Convolutional Neural Network

Abstract

Maintaining sustained visual attention during a cognitive task is of high importance [1]. Recent studies in Brain-Computer Interfaces (BCI) using electroencephalography (EEG) show a promising capability to reveal moment-to-moment attentional states [2]. A number of studies highlight the implication of Event-Related Potentials (ERPs) such as the N170 or P300, while others identified the significance of alpha- and beta-band activity elicited from specific parts of the human scalp in spotting the level of attention. In the present study, we developed a deep learning approach to analyzing the neurophysiological signals collected during a visual attention task. An individualized EEG-based classifier was developed to probe and extract underlying subject-specific features of early visual attention.

Experiment Setup

The BCI platform consists of a wireless EEG headset, a workstation computer with dual monitors, and data acquisition and analysis software. A flexible Graphical User Interface (GUI) was developed to allow the experimenter to conveniently administer the experimental protocols. The data collection protocol is borrowed and adapted from an fMRI study [32]. Simulink is adopted as the data acquisition engine; the signal processing, feature extraction, classification, image presentation, and recording of behavioral responses were all carried out within MATLAB. The participants were primed into the subcategories throughout the experiment by asking for their overt response during each trial.

Data

Thirty-eight healthy participants (11 females, 21.3±1.9 years, and 27 males, 23.1±5.2 years) were recruited to conduct tasks on visual attentional state evaluation. The participants were exposed to 8 blocks of 50 trials, each trial being a single image followed by a black screen. Each block starts with a 1000 ms cue texture that guides participants on the requested behavioral response. It then continues with single images randomly selected from scene subcategories (indoor scene & outdoor scene) or face subcategories (male face & female face), with a black screen between images. Each image is shown for 1000 ms and the black screen for 1000 ms to 1500 ms. We also collected the behavioral responses via keyboard button presses.

Method

Information is reflected in the EEG as dynamical changes in time, frequency, and space. Among the various time-frequency analysis methods, the wavelet transform stands out for its efficiency; this technique brings multi-resolution analysis to the EEG at different scales. The EEG record for each block was divided into 50 epochs. Each epoch includes the EEG recorded while the subject was exposed to an image, plus one second of EEG recorded during the successive blank screen.
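As an illustration of this epoching and wavelet-based feature extraction, the following is a minimal sketch in Python. The sampling rate, the Morlet wavelet, and the 4-30 Hz frequency grid are assumptions made for the example, not details taken from the poster.

```python
# A minimal sketch of the epoching and wavelet-based time-frequency features.
# Assumed (not from the poster): 128 Hz sampling rate, a Morlet wavelet,
# and time-averaged band power as the feature.
import numpy as np
import pywt

FS = 128                     # sampling rate in Hz (assumption)
IMG_S, BLANK_S = 1.0, 1.0    # 1 s image plus 1 s of the following blank screen

def epoch_block(block, onsets):
    """Cut one block's continuous EEG (n_channels, n_samples) into the
    50 epochs described above, given the image-onset sample indices."""
    length = int((IMG_S + BLANK_S) * FS)
    return np.stack([block[:, t:t + length] for t in onsets])

def wavelet_band_power(epoch, freqs=np.arange(4, 31)):
    """Continuous wavelet transform per channel; returns an
    (n_channels, n_frequencies) map of time-averaged power, i.e. one
    spatio-spectral representation of the epoch."""
    scales = pywt.central_frequency('morl') * FS / freqs
    power = []
    for ch in epoch:                                    # ch: (n_samples,)
        coef, _ = pywt.cwt(ch, scales, 'morl', sampling_period=1 / FS)
        power.append((np.abs(coef) ** 2).mean(axis=1))  # average over time
    return np.array(power)
```

Stacking the per-epoch outputs yields the (channels x frequencies) spatio-spectral maps referred to in the Results section below.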
Results

A Rectified Linear Unit (ReLU) activation function was used as the nonlinearity. We also used the Adam optimizer to adapt the learning rate while preventing fluctuations of the gradient descent. On average, dropout with a probability of 0.2 (keep probability = 0.8) in the fully connected layers resulted in higher accuracy over the dataset. Two different input representations were studied; the spatio-spectral representations led to more accurate results than the spatio-temporal representations. Overall, the first approach achieved an average accuracy of 55% and the second approach an average accuracy of 73%. The size and architecture of the model are designed so that training is considerably fast (less than 1 minute for each participant's dataset on a workstation equipped with a Titan Xp GPU, an 8-core i7 CPU, and 32 GB RAM), enabling it to run on small machines.

Conclusion

This study shows that EEG signals can be used to distinguish attentional states elicited by visual stimuli. The proposed deep learning approach has advantages over conventional selective visual attention decoding procedures [3]: instead of using engineered features of the EEG, we developed a classifier based on a convolutional neural network using learned features. We are also interested in more complex snapshot-based models such as VGGNet and ResNet. The temporal correlation between EEG samples suggests that a Recurrent Neural Network (RNN) may yield higher accuracy; we aim to apply a sequential model in the future. The findings of this study may benefit people with attention deficit disorders, and the platform may improve the understanding of the causal attributes of human visual attention.
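As a concrete illustration of the kind of compact, fast-to-train CNN described in the Results section, here is a minimal PyTorch sketch over spatio-spectral inputs. Only the ReLU nonlinearity, the Adam optimizer, and dropout with probability 0.2 in the fully connected layers come from the abstract; the layer counts, kernel sizes, and input dimensions (matching the earlier sketch's example of 14 channels x 27 frequencies) are assumptions, not the poster's exact architecture.

```python
# A minimal sketch (PyTorch) of a compact CNN over spatio-spectral inputs,
# i.e. one (channels x frequencies) map per epoch. Layer sizes and input
# dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class AttentionCNN(nn.Module):
    def __init__(self, n_ch=14, n_freq=27, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (n_ch // 4) * (n_freq // 4)  # spatial size after two 2x2 pools
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.2),                   # keep probability 0.8, per the abstract
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                        # x: (batch, 1, n_ch, n_freq)
        return self.classifier(self.features(x))

model = AttentionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the poster
loss_fn = nn.CrossEntropyLoss()
```

A model this small trains in well under a minute per participant on modest hardware, consistent with the training times reported above.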
Main file
Attention_DeepLearning_BCIMeeting2018.pdf (947.93 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01843916, version 1 (19-07-2018)
hal-01843916, version 2 (28-09-2018)

Identifiers

  • HAL Id: hal-01843916, version 2

Cite

Soheil Borhani, Reza Abiri, J I Muhammad, Yang Jiang, Xiaopeng Zhao. EEG-based Visual Attentional State Decoding Using Convolutional Neural Network. 7th International BCI Meeting, May 2018, Pacific Grove, CA, United States. pp.1-D-32. ⟨hal-01843916v2⟩
