NM000344: EEG dataset, 12 subjects#

CastillosBurstVEP100

Access recordings and metadata through EEGDash.

Citation: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). CastillosBurstVEP100. doi:10.1016/j.neuroimage.2023.120446

Modality: EEG | Subjects: 12 | Recordings: 12 | License: CC-BY-4.0 | Source: NeMAR

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000344

dataset = NM000344(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000344(cache_dir="./data", subject="01")

Advanced query

dataset = NM000344(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000344,
  title = {CastillosBurstVEP100},
  author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais},
  doi = {10.1016/j.neuroimage.2023.120446},
  url = {https://doi.org/10.1016/j.neuroimage.2023.120446},
}

About This Dataset#


CastillosBurstVEP100

c-VEP and Burst-VEP dataset from Castillos et al. (2023)

Dataset Overview

• Code: CastillosBurstVEP100
• Paradigm: cvep
• DOI: https://doi.org/10.1016/j.neuroimage.2023.120446
• Subjects: 12
• Sessions per subject: 1
• Events: 0=100, 1=101
• Trial interval: (0, 0.25) s
• File format: EEGLAB .set
• Number of contributing labs: 1
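The trial interval and sampling rate above pin down the epoch length: (0.25 − 0) s × 500 Hz = 125 samples. A minimal sketch of slicing fixed-length epochs from a continuous array (synthetic data and hypothetical event onsets, not the EEGDash API):

```python
import numpy as np

sfreq = 500.0            # sampling rate from the acquisition metadata
tmin, tmax = 0.0, 0.25   # trial interval listed above
n_samples = int((tmax - tmin) * sfreq)  # 125 samples per epoch

# Synthetic stand-ins: a 32-channel recording and three event onsets (in samples)
rng = np.random.default_rng(0)
data = rng.standard_normal((32, 5000))
event_samples = np.array([500, 1500, 2500])

# One epoch per event: shape (n_events, n_channels, n_samples)
epochs = np.stack([data[:, s:s + n_samples] for s in event_samples])
print(epochs.shape)  # (3, 32, 125)
```

In practice MNE's event machinery does this for you; the slicing above only makes the arithmetic explicit.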

Acquisition

• Sampling rate: 500.0 Hz
• Number of channels: 32
• Channel types: eeg=32
• Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8
• Montage: standard_1020
• Hardware: BrainProducts LiveAmp 32
• Reference: FCz
• Ground: FPz
• Sensor type: eeg
• Line frequency: 50.0 Hz
• Online filters: 50.0 Hz notch (bandwidth 0.2 Hz, order 16, IIR band-stop)
• Impedance threshold: 25.0 kOhm
• Cap manufacturer: BrainProducts
• Cap model: Acticap
• Electrode type: active

Participants

• Number of subjects: 12
• Health status: healthy
• Age: mean=30.6, std=7.1
• Gender distribution: female=4, male=8
• Species: human

Experimental Protocol

• Paradigm: cvep
• Task type: target selection
• Number of classes: 2
• Class labels: 0, 1
• Trial duration: 2.2 s
• Tasks: visual attention, target selection
• Study design: factorial within-subject
• Study domain: BCI performance and user experience
• Feedback type: none
• Stimulus type: visual
• Stimulus modalities: visual
• Primary modality: visual
• Synchronicity: synchronous
• Mode: offline
• Training/test split: False
• Instructions: focus on cued targets sequentially in random order
• Stimulus presentation: software=PsychoPy, monitor=Dell P2419HC, resolution=1920x1080, refresh_rate_hz=60

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

Event 0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_0

Event 1
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_1
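The two annotations differ only in their Label tag. A small sketch of resolving an event code to its HED string (the HED_TAGS table here is a hand-built stand-in mirroring the annotations above, not something exported by the dataset):

```python
# Hand-built lookup mirroring the HED annotations listed above (illustrative)
HED_TAGS = {
    0: ("Sensory-event", "Experimental-stimulus",
        "Visual-presentation", "Label/intensity_0"),
    1: ("Sensory-event", "Experimental-stimulus",
        "Visual-presentation", "Label/intensity_1"),
}

def hed_string(code: int) -> str:
    """Join the tags of one event code into a comma-separated HED string."""
    return ", ".join(HED_TAGS[code])

print(hed_string(1))
# Sensory-event, Experimental-stimulus, Visual-presentation, Label/intensity_1
```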

Paradigm-Specific Parameters

• Detected paradigm: cvep
• Code type: burst
• Number of targets: 4
• Cue duration: 0.5 s

Data Structure

• Trials: 60
• Blocks per session: 15
• Trials context: 15 blocks × 4 trials per block = 60 trials per subject for burst c-VEP at 100% amplitude

Preprocessing

Data state: raw

Signal Processing

• Classifiers: Convolutional Neural Network (CNN), Pearson correlation
• Feature extraction: CNN spatial filtering (8x1 kernel, 16 filters), CNN temporal filtering (1x32 kernel with dilation 2, 8 filters), CNN 2D convolution (5x5 kernel, 4 filters), sliding windows (250 ms, 2 ms stride)
• Frequency bands: analyzed=[0.1, 40.0] Hz
• Spatial filters: CNN 8x1 spatial convolution (16 filters)
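The 250 ms sliding windows with a 2 ms stride amount to 125-sample windows advanced one sample at a time at 500 Hz. A sketch of that segmentation step with NumPy (the CNN itself is not reproduced here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

sfreq = 500.0
win = int(0.250 * sfreq)   # 250 ms window -> 125 samples
step = int(0.002 * sfreq)  # 2 ms stride   -> 1 sample at 500 Hz

# One synthetic 2.2 s trial (32 channels x 1100 samples)
x = np.random.default_rng(0).standard_normal((32, 1100))

# Windows along the time axis, subsampled by the stride
windows = sliding_window_view(x, win, axis=-1)[:, ::step, :]
print(windows.shape)  # (32, 976, 125)
```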

Cross-Validation

• Method: sequential train/test split
• Evaluation type: offline classification, iterative calibration (1-6 blocks)
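A sequential split keeps the chronological block order: the first k blocks calibrate the model and the remaining blocks are held out. An illustrative sketch (k = 3 is an arbitrary pick from the 1-6 range):

```python
# Chronological split over the 15 blocks: the first k blocks calibrate,
# the rest are held out for evaluation.
blocks = list(range(15))
k = 3
train, test = blocks[:k], blocks[k:]
print(len(train), len(test))  # 3 12
```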

Performance (Original Study)

• Accuracy: 95.6%
• ITR: 67.49 bits/min
• Selection time: 1.5 s
• CNN training time: 15.0 s
• Burst 40% accuracy: 94.2%
• m-sequence 100% accuracy: 85.0%
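The reported ITR is consistent with the standard Wolpaw formula given 4 targets, 95.6% accuracy, and a ~1.5 s selection time; the small remaining gap presumably comes from timing details not listed here. A quick check:

```python
import math

def wolpaw_itr(n_classes: int, acc: float, t_sel: float) -> float:
    """Wolpaw information-transfer rate in bits per minute."""
    bits = (math.log2(n_classes)
            + acc * math.log2(acc)
            + (1 - acc) * math.log2((1 - acc) / (n_classes - 1)))
    return bits * 60.0 / t_sel

itr = wolpaw_itr(4, 0.956, 1.5)
print(round(itr, 1))  # ~66.8 bits/min, close to the reported 67.49
```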

BCI Application

• Applications: reactive BCI
• Environment: controlled laboratory
• Online feedback: False

Tags

• Pathology: Healthy
• Modality: EEG
• Type: reactive BCI, c-VEP, visual evoked potentials

Documentation

• Description: Burst c-VEP based BCI study comparing novel burst code sequences to traditional m-sequences at two amplitude depths (100% and 40%) to optimize classification performance, minimize calibration data, and improve user experience
• DOI: 10.1016/j.neuroimage.2023.120446
• Associated paper DOI: 10.1016/j.neuroimage.2023.120446
• License: CC-BY-4.0
• Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais
• Senior author: Frédéric Dehais
• Contact: kalou.cabrera-castillos@isae-supaero.fr
• Institution: Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)
• Department: Human Factors and Neuroergonomics
• Address: 10 Av. Edouard Belin, Toulouse, 31400, France
• Country: FR
• Repository: Zenodo
• Data URL: https://zenodo.org/record/8255618
• Publication year: 2023
• Funding: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France
• Ethics approval: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki
• Acknowledgements: This work was funded by AID (Powerbrain project), France, the AXA Research Fund Chair for Neuroergonomics, France, and the Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France.
• Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort


Abstract

The utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces Burst c-VEP, an innovative variant involving short bursts of aperiodic visual flashes at 2-4 flashes per second. The proposed burst c-VEP sequences exhibited higher accuracy (90.5%-95.6%) compared to m-sequence counterparts (71.4%-85.0%) with mean selection time of 1.5s. Reducing stimulus intensity to 40% amplitude depth only slightly decreased accuracy to 94.2% while substantially improving user experience. The collected dataset and CNN architecture implementation are shared through open-access repositories.

Methodology

Twelve healthy participants completed an offline 4-class c-VEP protocol using a factorial design. EEG was recorded at 500 Hz using BrainProducts LiveAmp 32-channel system. Participants focused on cued targets with factorial manipulation of pattern type (burst vs m-sequence) and amplitude depth (100% vs 40%). Visual stimuli were presented on a 60 Hz Dell monitor. Burst codes consisted of brief flashes (~50ms) with minimum 200ms inter-burst interval, while m-sequences used Fibonacci-type LFSR with segmented 132-frame subsequences. A CNN architecture with spatial (8x1, 16 filters), temporal (1x32, 8 filters), and 2D convolution (5x5, 4 filters) layers decoded EEG using 250ms sliding windows with 2ms stride. Calibration data ranged from 1-6 blocks (8.8-52.8s). Classification used sequential train/test splits with Pearson correlation for target selection. VEP analysis examined amplitude, latency, and inter-trial coherence. Statistical analyses used 2×2 repeated measures ANOVA.
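The burst-code constraints described above (~50 ms flashes, ≥200 ms inter-burst interval, 60 Hz frames, 2.2 s = 132-frame trials) can be sketched as a toy generator; this illustrates the timing constraints only, not the authors' actual code construction:

```python
import numpy as np

fps = 60          # monitor refresh rate
n_frames = 132    # 2.2 s trial at 60 Hz
flash = 3         # ~50 ms flash -> 3 frames
min_gap = 12      # >= 200 ms between the end of one burst and the next onset

rng = np.random.default_rng(42)
code = np.zeros(n_frames, dtype=int)
onsets = []
t = 0
while t + flash <= n_frames:
    code[t:t + flash] = 1
    onsets.append(t)
    # Jitter the inter-burst interval so the code stays aperiodic
    t += flash + min_gap + int(rng.integers(0, 6))

# Onset spacing of 15-20 frames gives roughly 3-4 bursts per second
print(len(onsets), int(code.sum()))
```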

References

Cabrera Castillos, K. (2023). 4-class code-VEP EEG data [Data set]. Zenodo. https://doi.org/10.5281/zenodo.8255618

Cabrera Castillos, K., Ladouce, S., Darmet, L., & Dehais, F. (2023). Burst c-VEP based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. https://doi.org/10.1016/j.neuroimage.2023.120446

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Notes: Added in version 1.1.0. Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb

Dataset Information#

Dataset ID

NM000344

Title

CastillosBurstVEP100

Author (year)

Castillos2023_CastillosBurstVEP100

Importable as

NM000344, Castillos2023_CastillosBurstVEP100

Year

2023

Authors

Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais

License

CC-BY-4.0

Citation / DOI

doi:10.1016/j.neuroimage.2023.120446

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{nm000344,
  title = {CastillosBurstVEP100},
  author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais},
  doi = {10.1016/j.neuroimage.2023.120446},
  url = {https://doi.org/10.1016/j.neuroimage.2023.120446},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 12

  • Recordings: 12

  • Tasks: 1

Channels & sampling rate
  • Channels: 32

  • Sampling rate (Hz): 500.0

  • Duration (hours): 0.88

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Attention

Files & format
  • Size on disk: 150.0 MB

  • File count: 12

  • Format: BIDS

License & citation
  • License: CC-BY-4.0

  • DOI: doi:10.1016/j.neuroimage.2023.120446


API Reference#

Use the NM000344 class to access this dataset programmatically.

class eegdash.dataset.NM000344(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

CastillosBurstVEP100

Study:

nm000344 (NeMAR)

Author (year):

Castillos2023_CastillosBurstVEP100

Canonical:

Also importable as: NM000344, Castillos2023_CastillosBurstVEP100.

Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 12; recordings: 12; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/nm000344
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000344
DOI: https://doi.org/10.1016/j.neuroimage.2023.120446

Examples

>>> from eegdash.dataset import NM000344
>>> dataset = NM000344(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#