NM000264: EEG dataset, 24 subjects#
BrainInvaders2013a
Access recordings and metadata through EEGDash.
Citation: E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M. Congedo (2019). BrainInvaders2013a. DOI: 10.5281/zenodo.1494163
Modality: eeg
Subjects: 24
Recordings: 292
License: CC-BY-1.0
Source: nemar
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000264
dataset = NM000264(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000264(cache_dir="./data", subject="01")
Advanced query
dataset = NM000264(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000264,
  title  = {BrainInvaders2013a},
  author = {E. Vaineau and A. Barachant and A. Andreev and P. Rodrigues and G. Cattan and M. Congedo},
  doi    = {10.5281/zenodo.1494163},
  url    = {https://doi.org/10.5281/zenodo.1494163},
}
About This Dataset#
BrainInvaders2013a
P300 dataset BI2013a from a “Brain Invaders” experiment.
Dataset Overview
Code: BrainInvaders2013a
Paradigm: p300
DOI: https://doi.org/10.5281/zenodo.2669187
Subjects: 24
Sessions per subject: 8
Events: Target=33285, NonTarget=33286
Trial interval: [0, 1] s
Runs per session: 2
File format: mat, csv, gdf
Contributing labs: GIPSA-lab
Acquisition
Sampling rate: 512.0 Hz
Number of channels: 16
Channel types: eeg=16
Channel names: Fp1, Fp2, F5, AFz, F6, T7, Cz, T8, P7, P3, Pz, P4, P8, O1, Oz, O2
Montage: standard_1020
Hardware: g.USBamp (g.tec, Schiedlberg, Austria)
Software: OpenVibe
Reference: left earlobe
Ground: FZ
Sensor type: wet Silver/Silver Chloride electrodes
Line frequency: 50.0 Hz
Online filters: no digital filter applied
Cap manufacturer: g.tec
Cap model: g.GAMMAcap
Electrode type: wet
Electrode material: Silver/Silver Chloride
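Since the trial interval is [0, 1] s at 512 Hz, each epoch spans exactly 512 samples after an event onset. The NumPy sketch below illustrates that slicing on synthetic data, using the event codes from the overview; all variable names are illustrative, not part of the EEGDash API (in practice the raw objects from the Quickstart come with MNE tooling for epoching).

```python
import numpy as np

# Synthetic stand-in for one recording: (channels, samples) at 512 Hz.
sfreq = 512.0
rng = np.random.default_rng(0)
data = rng.standard_normal((16, 60 * 512))   # 16 channels, 60 s of fake EEG

# Event onsets (in samples) with the numeric codes from the overview.
events = np.array([[1024, 33285],   # Target flash
                   [2048, 33286],   # NonTarget flash
                   [4096, 33285]])
code_to_label = {33285: "Target", 33286: "NonTarget"}

# The [0, 1] s trial interval at 512 Hz is exactly 512 samples per epoch.
epoch_len = int(round(1.0 * sfreq))
epochs = np.stack([data[:, s:s + epoch_len] for s, _ in events])
labels = [code_to_label[int(c)] for _, c in events]

print(epochs.shape, labels)   # (3, 16, 512) ['Target', 'NonTarget', 'Target']
```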
Participants
Number of subjects: 24
Health status: healthy
Age: mean=25.96, std=4.46, min=20.0, max=30.0
Gender distribution: male=12, female=12
BCI experience: volunteers recruited via flyers and university mailing list
Species: human
Experimental Protocol
Paradigm: p300
Task type: visual P300 BCI
Number of classes: 2
Class labels: Target, NonTarget
Study design: compare P300-based BCI with and without adaptive calibration using Riemannian geometry; randomised order of runs (adaptive vs non-adaptive)
Feedback type: visual (Brain Invaders video game interface)
Stimulus type: visual flashes
Stimulus modalities: visual
Primary modality: visual
Mode: both
Training/test split: True
Instructions: destroy targets in the Brain Invaders BCI video game
Stimulus presentation: distance from screen = 75 to 115 cm; screen = ViewSonic 22 inch; flash groups = 36 symbols distributed in 12 groups
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
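For scripting, the tag trees above flatten to comma-separated HED annotation strings. A small sketch (tag lists copied from the annotations above; in BIDS these typically live in an `HED` column of events.tsv):

```python
# Flatten the HED tag trees into comma-separated annotation strings.
hed_tags = {
    "Target": ["Sensory-event", "Experimental-stimulus",
               "Visual-presentation", "Target"],
    "NonTarget": ["Sensory-event", "Experimental-stimulus",
                  "Visual-presentation", "Non-target"],
}
hed_strings = {label: ", ".join(tags) for label, tags in hed_tags.items()}
print(hed_strings["Target"])
# Sensory-event, Experimental-stimulus, Visual-presentation, Target
```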
Paradigm-Specific Parameters
Detected paradigm: p300
Data Structure
Trials: Training_Target=80, Training_non-Target=400, Online=variable (depends on user performance)
Trials context: per_phase
Preprocessing
Data state: raw EEG with software tagging via USB (note: tagging introduces jitter and latency)
Preprocessing applied: False
Notes: Tags were sent by the application to the amplifier through the USB port and recorded as a supplementary channel; the tagging process was identical in all experimental conditions
Signal Processing
Classifiers: xDAWN, Riemannian, RMDM (Riemannian Minimum Distance to Mean)
Feature extraction: Covariance/Riemannian, xDAWN, common spatiotemporal pattern
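In practice these pipelines are usually built with a dedicated library such as pyriemann; the sketch below only illustrates the minimum-distance-to-mean classification rule on synthetic covariance matrices, using a log-Euclidean mean as a stand-in for the full Riemannian mean. All names, sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices."""
    eigvals = np.linalg.eigvals(np.linalg.solve(A, B)).real
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

def _logm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def _expm(S):
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(covs):
    """Log-Euclidean mean of SPD matrices (stand-in for the Riemannian mean)."""
    return _expm(np.mean([_logm(C) for C in covs], axis=0))

def mdm_predict(cov, class_means):
    """Minimum-distance-to-mean rule: pick the nearest class mean."""
    return min(class_means, key=lambda label: riemann_dist(class_means[label], cov))

# Synthetic demo: two classes whose covariances differ in overall power.
rng = np.random.default_rng(1)
def rand_spd(n, scale):
    X = scale * rng.standard_normal((n, 4 * n))
    return X @ X.T / (4 * n)

means = {
    "Target": log_euclidean_mean([rand_spd(4, 1.0) for _ in range(20)]),
    "NonTarget": log_euclidean_mean([rand_spd(4, 3.0) for _ in range(20)]),
}
pred = mdm_predict(rand_spd(4, 1.0), means)
print(pred)
```

The same nearest-mean rule underlies the RMDM classifier named above; only the mean estimator and the covariance features (e.g. xDAWN-augmented covariances) differ in the published pipelines.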
Cross-Validation
Evaluation type: cross_session
Performance (Original Study)
Metric: Balanced Accuracy, used because of the unbalanced classes (1:5 ratio of Target to non-Target)
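A quick illustration of why balanced accuracy matters here: with the 1:5 class ratio, a degenerate classifier that always predicts non-Target looks strong on plain accuracy but scores only chance level on balanced accuracy (the mean of per-class recalls). Synthetic labels below, sized like one Training phase:

```python
import numpy as np

# Labels mirroring the 1:5 Target/NonTarget ratio (80 vs 400 flashes).
y_true = np.array([1] * 80 + [0] * 400)   # 1 = Target, 0 = NonTarget
y_pred = np.zeros_like(y_true)            # degenerate "always NonTarget"

accuracy = float(np.mean(y_true == y_pred))
recalls = [float(np.mean(y_pred[y_true == c] == c)) for c in (0, 1)]
balanced_accuracy = float(np.mean(recalls))

print(round(accuracy, 3), balanced_accuracy)   # 0.833 0.5
```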
BCI Application
Applications: gaming
Environment: small room (4 square meters) with a one-way glass window for experimenter observation
Online feedback: True
Tags
Pathology: Healthy
Modality: Visual
Type: Perception
Documentation
Description: EEG recordings of 24 subjects doing a visual P300 Brain-Computer Interface experiment comparing adaptive vs non-adaptive calibration using Riemannian geometry
DOI: 10.5281/zenodo.1494163
Associated paper DOI: 10.5281/zenodo.2649006
License: CC-BY-1.0
Investigators: E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M. Congedo
Senior author: M. Congedo
Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP
Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France
Country: FR
Repository: Zenodo
Data URL: https://doi.org/10.5281/zenodo.1494163
Publication year: 2019
Ethics approval: Approved by the Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle)
Keywords: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment, Adaptive, Calibration
Abstract
This dataset contains electroencephalographic (EEG) recordings of 24 subjects doing a visual P300 Brain-Computer Interface experiment on PC. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. The experiment was designed to compare the use of a P300-based brain-computer interface with and without adaptive calibration using Riemannian geometry. EEG data were recorded using 16 electrodes during an experiment at GIPSA-lab, Grenoble, France, in 2013.
Methodology
Subjects participated in sessions comprising two runs (Non-Adaptive and Adaptive, in randomised order). Each run had a Training (calibration) phase and an Online phase. In Non-Adaptive mode, the Training data calibrated the MDM classifier used in the Online phase. In Adaptive mode, the classifier was initialized with generic class geometric means from a previous experiment and continuously adapted using a Riemannian method. Brain Invaders interface: 36 symbols in 12 groups; one repetition = 12 flashes (2 Target, 10 non-Target). Training phase: 80 Target and 400 non-Target flashes (fixed). Online phase: a variable number of repetitions, depending on performance, to destroy the targets. Subjects were blind to the mode of operation.
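The training-phase counts are internally consistent with the flash structure: 80 Target flashes at 2 per repetition imply 40 repetitions, matching 400 non-Target flashes at 10 per repetition, for 480 flashes in total. A trivial check:

```python
# Sanity check of the fixed Training-phase counts described above.
target_per_rep, nontarget_per_rep = 2, 10   # per 12-flash repetition
target_total, nontarget_total = 80, 400     # fixed Training-phase flashes

reps_from_targets = target_total // target_per_rep
reps_from_nontargets = nontarget_total // nontarget_per_rep
assert reps_from_targets == reps_from_nontargets

total_flashes = reps_from_targets * (target_per_rep + nontarget_per_rep)
print(reps_from_targets, total_flashes)   # 40 480
```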
References
Vaineau, E., Barachant, A., Andreev, A., Rodrigues, P. C., Cattan, G. & Congedo, M. (2019). Brain Invaders adaptive versus non-adaptive P300 brain-computer interface dataset. arXiv preprint arXiv:1904.09111.
Barachant, A. & Congedo, M. (2014). A Plug & Play P300 BCI using Information Geometry. arXiv:1409.0107.
Congedo, M., Goyat, M., Tarrin, N., Ionescu, G., Rivet, B., Varnet, L., Phlypo, R., Jrad, N., Acquadro, M. & Jutten, C. (2011). “Brain Invaders”: a prototype of an open-source P300-based video game working with the OpenViBE platform. Proc. IBCI Conf., Graz, Austria, 280-283.
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A. & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID: NM000264
Title: BrainInvaders2013a
Author (year): BrainInvaders2013
Canonical: BrainInvaders2013a, BI2013a
Importable as: NM000264, BrainInvaders2013, BrainInvaders2013a, BI2013a
Year: 2019
Authors: E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M. Congedo
License: CC-BY-1.0
Citation / DOI: 10.5281/zenodo.1494163
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 24
Recordings: 292
Tasks: 1
Channels: 16
Sampling rate (Hz): 512.0
Duration (hours): 20.63
Pathology: Healthy
Modality: Visual
Type: Attention
Size on disk: 1.7 GB
File count: 292
Format: BIDS
License: CC-BY-1.0
DOI: 10.5281/zenodo.1494163
API Reference#
Use the NM000264 class to access this dataset programmatically.
- class eegdash.dataset.NM000264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases:
EEGDashDatasetBrainInvaders2013a
- Study: nm000264 (NeMAR)
- Author (year): BrainInvaders2013
- Canonical: BrainInvaders2013a, BI2013a
Also importable as: NM000264, BrainInvaders2013, BrainInvaders2013a, BI2013a.
Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 24; recordings: 292; tasks: 1.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
Type: Path
- query#
Merged query with the dataset filter applied.
Type: dict
- records#
Metadata records used to build the dataset, if pre-fetched.
Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description.
query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter.
Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/nm000264
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000264
DOI: https://doi.org/10.5281/zenodo.1494163
Examples
>>> from eegdash.dataset import NM000264
>>> dataset = NM000264(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset, eegdash.dataset