NM000113: eeg dataset, 15 subjects#

2020 BCI competition, track 3

Access recordings and metadata through EEGDash.

Citation: Seong-Whan Lee, Klaus-Robert Müller, José del R. Millán (2020). 2020 BCI competition, track 3. 10.82901/nemar.nm000113

Modality: EEG | Subjects: 15 | Recordings: 45 | License: CC-BY-4.0 | Source: NeMAR

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000113

dataset = NM000113(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000113(cache_dir="./data", subject="01")

Advanced query

dataset = NM000113(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset.datasets:
    print(rec.description["subject"], rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000113,
  title = {2020 BCI competition, track 3},
  author = {Seong-Whan Lee and Klaus-Robert Müller and José del R. Millán},
  year = {2020},
  doi = {10.82901/nemar.nm000113},
  url = {https://doi.org/10.82901/nemar.nm000113},
}

About This Dataset#

DOI

2020 BCI competition, track 3

Introduction

The 2020 BCI Competition Track 3 dataset contains EEG recordings from participants performing imagined speech tasks. This dataset was designed for brain-computer interface research focused on decoding imagined speech from brain signals. The dataset includes recordings from 15 subjects performing five different imagined speech commands: “Hello”, “Help me”, “Stop”, “Thank you”, and “Yes”. The data is divided into training, validation, and test sets to facilitate machine learning approaches to imagined speech classification.

Overview of the experiment

Participants performed imagined speech tasks where they were instructed to mentally articulate five different phrases without producing any audible speech or overt mouth movements. The five imagined speech commands were: “Hello”, “Help me”, “Stop”, “Thank you”, and “Yes”. EEG signals were recorded during these mental articulation tasks. The dataset is split into three sets: Training Set (run-00), Validation Set (run-01), and Test Set (run-02). Each recording session contains multiple trials of imagined speech, with each trial corresponding to one of the five command categories. The EEG data was recorded using a multi-channel EEG system, and the exact number of channels and their montage are preserved in the BIDS format.

Description of the preprocessing if any

The original MATLAB (.mat) files from the BCI Competition have been converted to BIDS-compliant EDF format. For training and validation sets, the data was stored in structured MATLAB arrays with fields for EEG data (‘x’), labels (‘y’), sampling frequency (‘fs’), and channel labels (‘clab’). For the test set, the data was stored in HDF5 format and labels were extracted from the Track3_Answer Sheet_Test.xlsx file. The EEG data has been scaled from the original units to Volts (multiplied by 1e-6). The epoched data structure from the original dataset has been concatenated into continuous recordings for BIDS compliance, with annotations marking the onset and duration of each imagined speech trial. Channel names and montage information from the original ‘mnt’ (montage) structure have been preserved in the BIDS format.
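The unit scaling and trial-duration arithmetic described above can be sketched in a few lines. The epoch shape below is a made-up example for illustration, not taken from the dataset:

```python
import numpy as np

fs = 256.0                       # sampling frequency (the 'fs' field), Hz
n_channels, n_samples = 64, 512  # hypothetical epoch shape for the 'x' field

# Fake epoched data in microvolts, as stored in the original MATLAB arrays
epoch_uv = np.random.randn(n_channels, n_samples) * 50.0
epoch_v = epoch_uv * 1e-6        # scale to Volts for BIDS/EDF

# Annotation duration for one trial: samples divided by sampling frequency
duration_s = n_samples / fs      # 2.0 s for a 512-sample epoch at 256 Hz
```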

Description of the event values if any

The events.tsv files contain annotations for each imagined speech trial. Each event has:
  • onset: Time in seconds from the beginning of the recording when the imagined speech trial begins

  • duration: Duration of the trial in seconds (calculated as the number of samples in the epoch divided by the sampling frequency)

  • value: The imagined speech command label, one of: “Hello”, “Help me”, “Stop”, “Thank you”, or “Yes”

  • trial_type: Corresponds to the value field

These annotations enable temporal segmentation of the continuous EEG data by imagined speech command type. The labels for the training and validation sets were extracted from the ‘y’ field in the original MATLAB structures (one-hot encoded vectors converted to class indices). For the test set, labels were obtained from the Track3_Answer Sheet_Test.xlsx file provided with the competition data.
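Since events.tsv is plain tab-separated text, its structure can be illustrated without the dataset itself. The rows below are invented and only mirror the columns described above:

```python
import csv
import io

# Synthetic events.tsv content (onsets/durations invented for illustration)
tsv = (
    "onset\tduration\tvalue\ttrial_type\n"
    "0.0\t2.0\tHello\tHello\n"
    "2.0\t2.0\tStop\tStop\n"
    "4.0\t2.0\tYes\tYes\n"
)

events = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
# Segment trials by command label, as described above
hello_trials = [e for e in events if e["value"] == "Hello"]
print(len(events), len(hello_trials))  # 3 1
```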

Citation

When using this dataset, please cite:
  1. The 2020 BCI Competition Track 3: https://osf.io/pq7vb/overview

  2. Original competition organizers and data collectors (please refer to the competition website for complete citation information)

Data curators: Pierre Guetschel (BIDS conversion)

Competition co-chairs: Seong-Whan Lee, Klaus-Robert Müller, José del R. Millán

Automatic report

Report automatically generated by mne_bids.make_report().

The 2020 BCI competition, track 3 dataset was created by Seong-Whan Lee, Klaus-Robert Müller, and José del R. Millán and conforms to BIDS version 1.7.0. This report was generated with MNE-BIDS (https://doi.org/10.21105/joss.01896). The dataset consists of 15 participants (sex, handedness, and age all unknown). Data were recorded using an EEG system sampled at 256.0 Hz with line noise at n/a Hz. There were 45 scans in total. Recording durations ranged from 155.27 to 931.64 seconds (mean = 414.06, std = 365.98), for a total of 18632.64 seconds of data recorded over all scans. Each scan had on average 64.0 (std = 0.0) recording channels, of which 64.0 (std = 0.0) were used in analysis (0.0 +/- 0.0 were removed from analysis).
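The report's totals are self-consistent, which can be verified with a few lines of arithmetic (numbers copied from the paragraph above):

```python
n_scans = 45
mean_duration_s = 414.06
total_s = 18632.64

# 45 scans at an average of 414.06 s each ≈ 18632.64 s in total
assert abs(n_scans * mean_duration_s - total_s) < 0.5

# Total recording time expressed in hours
total_hours = round(total_s / 3600, 2)
print(total_hours)  # 5.18
```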

Dataset Information#

Dataset ID

NM000113

Title

2020 BCI competition, track 3

Author (year)

Lee2020

Canonical

Importable as

NM000113, Lee2020

Year

2020

Authors

Seong-Whan Lee, Klaus-Robert Müller, José del R. Millán

License

CC-BY-4.0

Citation / DOI

10.82901/nemar.nm000113

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{nm000113,
  title = {2020 BCI competition, track 3},
  author = {Seong-Whan Lee and Klaus-Robert Müller and José del R. Millán},
  year = {2020},
  doi = {10.82901/nemar.nm000113},
  url = {https://doi.org/10.82901/nemar.nm000113},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 15

  • Recordings: 45

  • Tasks: 1

Channels & sampling rate
  • Channels: 64

  • Sampling rate (Hz): 256

  • Duration (hours): 5.18

Tags
  • Pathology: Not specified

  • Modality: —

  • Type: —

Files & format
  • Size on disk: 585.2 MB

  • File count: 45

  • Format: BIDS

License & citation
  • License: CC-BY-4.0

  • DOI: 10.82901/nemar.nm000113

Provenance

API Reference#

Use the NM000113 class to access this dataset programmatically.

class eegdash.dataset.NM000113(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

2020 BCI competition, track 3

Study:

nm000113 (NeMAR)

Author (year):

Lee2020

Canonical:

Also importable as: NM000113, Lee2020.

Modality: eeg. Subjects: 15; recordings: 45; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
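A minimal sketch of the AND-combination described here. The merge strategy shown is an assumption for illustration; the real logic lives inside EEGDashDataset:

```python
# Dataset filter applied automatically by the NM000113 class
dataset_filter = {"dataset": "nm000113"}

# User-supplied MongoDB-style query; it must not contain the key "dataset"
user_query = {"subject": {"$in": ["01", "02"]}}
assert "dataset" not in user_query

# Both conditions must hold, so the dicts are combined into one filter
merged = {**dataset_filter, **user_query}
print(merged)
```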

References

OpenNeuro dataset: https://openneuro.org/datasets/nm000113
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000113
DOI: https://doi.org/10.82901/nemar.nm000113

Examples

>>> from eegdash.dataset import NM000113
>>> dataset = NM000113(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#