Integrating EEG and MEG information to enhance motor-imagery classification in brain-computer interface

Abstract: Brain-computer interface (BCI) is a potential tool for rehabilitation and communication. Most BCI experiments rely on electroencephalography (EEG) to translate brain signals into device commands. Despite its clinical applications, BCI faces both engineering and user-oriented challenges that limit its wider adoption; one of them is insufficient classification accuracy. In this work, we assess the possibility of integrating electroencephalographic (EEG) and magnetoencephalographic (MEG) signals to enhance classification performance in motor imagery (MI)-based brain-computer interfaces. For that purpose, we performed an offline classification on a dataset gathering simultaneously recorded M/EEG signals from 15 healthy subjects (aged 28.13 ± 4.10 years, 7 women). We used the one-dimensional, two-target box-task experiment, in which subjects imagined a movement with the right hand or remained at rest, depending on the position of the target. During the first five runs, only the target was displayed (training phase), followed by six runs with provided feedback (testing phase). For each modality (EEG, magnetometers - MAG, and gradiometers - GRAD), we semi-automatically extracted the relevant features, in our case (sensor; frequency) couples, from the training recordings. Then, we classified the testing data by integrating the classifiers' outputs from each modality via a Bayesian fusion approach, in which the contribution of each modality is modulated by an attributed weight computed from the associated posterior probability. To compare classification performance between the fusion and single-modality approaches, classification accuracy was estimated with the area under the curve (AUC). Significant event-related de/synchronization (ERD/S) changes appeared in the alpha (ERD ~ -100 %) and beta (ERD ~ -60 %) bands in all modalities, starting after the target appearance.
However, ERDs tended to appear earlier in the MEG signals than in the EEG signals. Furthermore, the relevant features extracted from MEG signals were more focused around the primary motor areas of the hand and in the alpha band (8-13 Hz). The modality significantly affected the classification accuracy (ANOVA, p < 0.001). More specifically, average AUCs of 0.58 ± 0.07, 0.58 ± 0.09, 0.61 ± 0.10, and 0.66 ± 0.11 were obtained with the EEG, MAG, GRAD, and fusion classifiers, respectively. Results show that in thirteen subjects, fusion improved accuracy compared with the single-modality approaches, with relative increments ranging from 1.3 % to 50.9 %. In the two other subjects, fusion gave performance equivalent to EEG. The proposed fusion method thus led, in a large majority of subjects, to a reduction in misclassifications of the subjects' mental state. Furthermore, our weighting approach enabled flexible optimization of the modality choice according to the subject and the session. Despite the lack of portability of current MEG devices, research focused on sensor miniaturization will probably enable wider adoption of integrated M/EEG features to further enhance BCI performance.
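The weighted fusion of per-modality classifier outputs and the AUC-based evaluation described above can be sketched as follows. This is a minimal illustrative sketch: the exact fusion rule, the weight values, and the AUC estimator are assumptions for exposition, not the authors' implementation.

```python
# Sketch of weighted fusion of per-modality posteriors (EEG, MAG, GRAD)
# and AUC evaluation. Weights here stand in for the paper's
# posterior-probability-derived weights; values are hypothetical.

def fuse_posteriors(posteriors, weights):
    """Weighted combination of per-modality posteriors p(MI | x_m),
    normalized by the total weight."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, posteriors)) / total

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    statistic: fraction of (positive, negative) pairs correctly ordered,
    counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: one trial, three modalities, hypothetical posteriors and weights
fused = fuse_posteriors([0.55, 0.70, 0.65], [0.2, 0.4, 0.4])
```

In this scheme, a modality whose training posteriors were more reliable receives a larger weight, so an uninformative modality cannot drag the fused decision down as strongly as in an unweighted average.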
Document type :
Conference papers

https://hal.archives-ouvertes.fr/hal-01966311
Contributor : Marie-Constance Corsi <>
Submitted on : Friday, December 28, 2018 - 9:22:42 AM
Last modification on : Tuesday, April 30, 2019 - 3:44:11 PM
Long-term archiving on : Friday, March 29, 2019 - 12:42:51 PM

File

BIOMAG_poster.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01966311, version 1

Citation

Marie-Constance Corsi, Mario Chavez, Denis Schwartz, Laurent Hugueville, Ankit Khambhati, et al.. Integrating EEG and MEG information to enhance motor-imagery classification in brain-computer interface. BIOMAG 2018 - 21st International Conference on Biomagnetism, Aug 2018, Philadelphia, United States. ⟨hal-01966311⟩
