Jul 07, 2025

Filter Bank CSP with Riemannian Weighting for Disability-Centric Motor Imagery Brain Computer Interface

  • Souissi Jihen1
  • 1FSM Monastir
Protocol Citation: Souissi Jihen 2025. Filter Bank CSP with Riemannian Weighting for Disability-Centric Motor Imagery Brain Computer Interface. protocols.io https://dx.doi.org/10.17504/protocols.io.kxygx4mekl8j/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Protocol status: Working
We use this protocol and it's working
Created: July 06, 2025
Last Modified: July 07, 2025
Protocol Integer ID: 221808
Keywords: motor imagery, brain-computer interface, BCI, EEG, filter bank CSP, Riemannian weighting, CSP, feature extraction, signal processing, disability
Abstract
This protocol describes the steps involved in processing EEG-based motor imagery (MI) signals for brain-computer interfaces (BCI), from pre-processing to classification. Emphasis is placed on the methods used, in particular the feature extraction phase, which is carried out over several frequency bands using a modified CSP method.
Introduction
Brain-computer interfaces (BCI) based on motor imagery (MI) enable users to interact with external systems using only their brain activity, without muscle movement. Electroencephalography (EEG) is widely used in BCI applications due to its non-invasive nature.
This protocol describes a comprehensive methodology for EEG signal processing using the Common Spatial Patterns (CSP) algorithm, combined with a one-versus-rest (OVR) approach for adaptation to the multi-class setting, in a non-Euclidean space: a Riemannian space with a variable metric for EEG signals. The approach includes multiband filtering of the EEG signals and spatial feature extraction using a modified CSP, named RW-FBCSP, which improves class separability by incorporating Riemannian similarity weighting. Classification techniques such as Linear Discriminant Analysis (LDA), Random Forest Classifier (RFC) and Multilayer Perceptron (MLP) are then applied to distinguish the motor imagery tasks. The methodology is validated on BCI Competition IV data (dataset 2a).
Tools used
  • Dataset: BCI Competition IV, dataset 2a
  • Language: Python 3.6
  • Libraries: MNE, NumPy, scikit-learn and SciPy

EEG signal pre-processing
  • The development process in Python starts by loading the EEG data from the dataset files in GDF format (.gdf).

  • Segmentation of the EEG data into epochs synchronized to the stimuli (tmin/tmax settings), as well as the choice of the motor tasks (event_id) corresponding to each class.

```python
epochs = mne.Epochs(raw_data, events, event_id=stims, tmin=-0.5, tmax=1.5, baseline=(-0.2, 0), preload=True)
```

  • Band-pass filtering using a Butterworth (IIR) filter applied to each sub-band of the filter bank
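The filter-bank step can be sketched as below with SciPy. The band edges, filter order and array shapes are assumptions for illustration, not values fixed by the protocol; a zero-phase Butterworth band-pass is applied per sub-band.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank(signals, sfreq, bands, order=4):
    """Band-pass each trial through every sub-band of the filter bank.

    signals: array (n_trials, n_channels, n_samples); returns an array of
    shape (n_bands, n_trials, n_channels, n_samples).
    """
    out = []
    for low, high in bands:
        # Zero-phase Butterworth band-pass (IIR); band edges below are
        # illustrative, not taken from the protocol.
        sos = butter(order, [low, high], btype="bandpass", fs=sfreq, output="sos")
        out.append(sosfiltfilt(sos, signals, axis=-1))
    return np.stack(out)

bands = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]
trials = np.random.randn(5, 22, 500)  # 5 trials, 22 EEG channels, 2 s at 250 Hz
filtered = filter_bank(trials, sfreq=250.0, bands=bands)
print(filtered.shape)  # (7, 5, 22, 500)
```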
EEG feature extraction (CSP model)
This step consists of developing a customized, optimized function, based on the classic CSP model, for extracting discriminative features related to the different motor tasks.
The function is named RW-FBCSP (Riemannian-Weighting Filter Bank Common Spatial Pattern) because it combines the CSP model with the OVR approach in a Riemannian space with similarity weighting.
The filtered EEG signals are passed through this function to obtain a new, more compact and informative representation of the signals (a reduced feature vector) suitable for the classification stage.
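Since the RW-FBCSP function itself is not listed in this protocol, the sketch below only illustrates the idea for a single sub-band: one-versus-rest CSP log-variance features scaled by a weight derived from the affine-invariant Riemannian distance between the class-mean covariances. The exact weighting scheme, component count and helper names are assumptions, not the protocol's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_a, cov_b, n_components=4):
    # Classic CSP: jointly diagonalize the two class covariances and keep
    # the spatial filters with the most extreme generalized eigenvalues.
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(np.abs(vals - 0.5))[::-1]
    return vecs[:, order[:n_components]]

def riemannian_distance(c1, c2):
    # Affine-invariant Riemannian distance between two SPD matrices.
    vals = eigh(c1, c2, eigvals_only=True)
    return np.sqrt(np.sum(np.log(vals) ** 2))

def rw_fbcsp_features(band_trials, labels, target, n_components=4):
    """OVR log-variance CSP features for one sub-band, scaled by a Riemannian
    similarity weight (illustrative only; not the protocol's exact scheme)."""
    covs = np.array([t @ t.T / t.shape[1] for t in band_trials])
    cov_pos = covs[labels == target].mean(axis=0)
    cov_neg = covs[labels != target].mean(axis=0)
    w = csp_filters(cov_pos, cov_neg, n_components)
    # The further apart the class-mean covariances lie on the SPD manifold,
    # the more this sub-band contributes to the final feature vector.
    weight = riemannian_distance(cov_pos, cov_neg)
    feats = np.array([np.log(np.var(w.T @ t, axis=1)) for t in band_trials])
    return weight * feats

rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 22, 500))  # 20 trials, 22 channels
labels = np.repeat([1, 2, 3, 4], 5)          # four MI classes
feats = rw_fbcsp_features(trials, labels, target=1)
print(feats.shape)  # (20, 4)
```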
Classification step
The feature vectors obtained for each subject and the associated labels are used to train the three classifiers: LinearDiscriminantAnalysis, RandomForestClassifier and MLPClassifier.
Each classifier is trained and tested with each subject's data, and for each subject the Accuracy, Precision, Recall and F1-score values are computed and stored in global lists. These values are averaged to obtain the mean test results over the nine subjects.
For better performance, a majority-voting method is used. This technique consists of building a pipeline that combines the outputs of the three classifiers into a final decision.
Finally, a portion of the code displays a preview of the result of merging the predictions from the three classifiers.
  • Example of a voting classifier
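A minimal sketch of hard majority voting over the three classifiers named above, using scikit-learn's `VotingClassifier`; the synthetic features and the classifier hyperparameters are placeholders, not values from the protocol.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in features; in the protocol these are the RW-FBCSP feature vectors.
X, y = make_classification(n_samples=200, n_features=12, n_classes=4,
                           n_informative=6, random_state=42)

# Hard majority voting: the final label is the one predicted by the most
# classifiers among LDA, RFC and MLP.
voting = VotingClassifier(
    estimators=[("lda", LinearDiscriminantAnalysis()),
                ("rfc", RandomForestClassifier(random_state=42)),
                ("mlp", MLPClassifier(max_iter=500, random_state=42))],
    voting="hard")
voting.fit(X, y)
print(voting.predict(X[:5]))
```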



Evaluation step
  • Metrics used: Accuracy, Precision, Recall and F1-score
  • Method: Cross-validation
```python
from sklearn.model_selection import KFold

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
```
  • Average scores for the 9 subjects
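The per-subject evaluation and the averaging over the nine subjects can be sketched as follows. The synthetic features and the choice of RFC as the evaluated classifier are stand-ins for illustration; the real inputs are each subject's RW-FBCSP features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import KFold

def evaluate_subject(X, y, clf, n_splits=5):
    # 5-fold cross-validation; returns the four metrics averaged over folds
    # for one subject.
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in kfold.split(X):
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        acc = accuracy_score(y[test_idx], pred)
        prec, rec, f1, _ = precision_recall_fscore_support(
            y[test_idx], pred, average="macro", zero_division=0)
        scores.append((acc, prec, rec, f1))
    return np.mean(scores, axis=0)

# Synthetic features stand in for the nine subjects' RW-FBCSP vectors.
per_subject = []
for seed in range(9):
    X, y = make_classification(n_samples=100, n_features=12, n_classes=4,
                               n_informative=6, random_state=seed)
    per_subject.append(evaluate_subject(X, y, RandomForestClassifier(random_state=0)))
mean_scores = np.mean(per_subject, axis=0)  # averaged over the 9 subjects
print(dict(zip(["Accuracy", "Precision", "Recall", "F1-score"], mean_scores)))
```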




Comments
  • The processing steps are applied independently to the training and test sets for each subject.
  • The extracted training and test data are used, respectively, to train the classifiers and to evaluate their performance. A validation step is used to optimize the hyperparameters, so the data are divided as follows: all test data are reserved for the final evaluation, while the training set is split into a subset used for classifier training (80%) and a subset dedicated to hyperparameter validation (20%).
For KFold(n_splits=k): validation set = 1/k (1 of the k folds) = 1/5 = 20% (if k = 5)
Training set = (k-1)/k = 4/5 = 80% (if k = 5)
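The 80/20 split of a subject's training set can be sketched with scikit-learn's `train_test_split`; the synthetic data are a stand-in, and the stratification choice is an assumption for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Illustrative 80/20 split of a subject's training set into a classifier-training
# subset and a hyperparameter-validation subset (test data stay untouched).
X, y = make_classification(n_samples=100, n_features=12, n_classes=4,
                           n_informative=6, random_state=42)
X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=42)
print(len(X_fit), len(X_val))  # 80 20
```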
  • After each classification step (performed by one of the three classifiers), a piece of code presents the classification model's performance in a structured way; pandas.DataFrame is used to build summary tables.
  • Example of a summary table: test results obtained by RFC
Summary of RFC results
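A summary table of this kind can be built with pandas as sketched below. The subject IDs and scores are placeholders only; the real values come from the evaluation step.

```python
import pandas as pd

# Hypothetical per-subject RFC scores; placeholders, not protocol results.
results = pd.DataFrame(
    {"Subject": [f"A0{i}" for i in range(1, 4)],
     "Accuracy": [0.78, 0.71, 0.84],
     "Precision": [0.77, 0.70, 0.83],
     "Recall": [0.78, 0.71, 0.84],
     "F1-score": [0.77, 0.70, 0.83]})

# Append a row with the mean of each metric over the subjects.
results.loc["Mean"] = ["Mean"] + results[["Accuracy", "Precision", "Recall",
                                          "F1-score"]].mean().tolist()
print(results.to_string(index=False))
```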