NM000311: EEG dataset, 25 subjects#

Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)

Access recordings and metadata through EEGDash.

Citation: Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan Lee (2020). Multimodal upper-limb MI/ME EEG (Jeong et al. 2020). 10.82901/nemar.nm000311

Modality: EEG | Subjects: 25 | Recordings: 213 | License: CC0-1.0 | Source: NeMAR

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000311

dataset = NM000311(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000311(cache_dir="./data", subject="01")

Advanced query

dataset = NM000311(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
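
Cut trials into windows (optional)

EEGDash integrates with braindecode, so the annotated trials can be cut into fixed-length windows. A minimal sketch, assuming braindecode is installed and the recordings have been preloaded:

from braindecode.preprocessing import create_windows_from_events

# One window per annotated trial event (offsets are in samples)
windows = create_windows_from_events(
    dataset,
    trial_start_offset_samples=0,
    trial_stop_offset_samples=0,
)
print(len(windows))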

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000311,
  title = {Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)},
  author = {Ji-Hoon Jeong and Jeong-Hyun Cho and Kyung-Hwan Shim and Byoung-Hee Kwon and Byeong-Hoo Lee and Do-Yeun Lee and Dae-Hyeok Lee and Seong-Whan Lee},
  year = {2020},
  doi = {10.82901/nemar.nm000311},
  url = {https://doi.org/10.82901/nemar.nm000311},
}

About This Dataset#

Jeong2020

Multimodal MI+ME dataset from Jeong et al. (2020).

Dataset Overview

  • Code: Jeong2020
  • Paradigm: imagery
  • DOI: 10.1093/gigascience/giaa098
  • Subjects: 25
  • Sessions per subject: 3
  • Runs per session: 3
  • Trial interval: [0, 4] s
  • File format: BrainVision
  • Events: reach_forward=1, reach_backward=2, reach_left=3, reach_right=4, reach_up=5, reach_down=6, grasp_cup=7, grasp_ball=8, grasp_card=9, twist_pronation=10, twist_supination=11
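
For convenience, the event mapping above can be written as an MNE-style event_id dictionary. A small sketch (the on-disk annotation labels should be checked against these names):

# Event label -> integer code, as listed in the overview above
event_id = {
    "reach_forward": 1, "reach_backward": 2, "reach_left": 3,
    "reach_right": 4, "reach_up": 5, "reach_down": 6,
    "grasp_cup": 7, "grasp_ball": 8, "grasp_card": 9,
    "twist_pronation": 10, "twist_supination": 11,
}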

Acquisition

  • Sampling rate: 1000.0 Hz
  • Number of channels: 71
  • Channel types: eeg=60, eog=4, emg=7
  • Channel names (EEG): Fp1, AF7, AF3, AFz, F7, F5, F3, F1, Fz, FT7, FC5, FC3, FC1, T7, C5, C3, C1, Cz, TP7, CP5, CP3, CP1, CPz, P7, P5, P3, P1, Pz, PO7, PO3, POz, Fp2, AF4, AF8, F2, F4, F6, F8, FC2, FC4, FC6, FT8, C2, C4, C6, T8, CP2, CP4, CP6, TP8, P2, P4, P6, P8, PO4, PO8, O1, Oz, O2, Iz
  • Montage: standard_1005
  • Hardware: BrainAmp (BrainProducts GmbH)
  • Sensor type: actiCap
  • Reference: FCz
  • Ground: Fpz
  • Line frequency: 60.0 Hz
  • Online filters: highpass 0.016 Hz, lowpass 1000 Hz
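
Because the recordings mix EEG, EOG, and EMG channels, analyses usually start by selecting one channel type. A sketch using MNE, assuming raw was loaded as in the Quickstart:

# Apply the documented montage, then split channels by type
raw.set_montage("standard_1005", on_missing="ignore")
raw_eeg = raw.copy().pick("eeg")  # 60 scalp channels
raw_eog = raw.copy().pick("eog")  # 4 EOG channels
raw_emg = raw.copy().pick("emg")  # 7 EMG channels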

Participants

  • Number of subjects: 25
  • Health status: healthy
  • Age range: 24–32 years
  • Gender distribution: female=10, male=15
  • Handedness: right-handed
  • BCI experience: naive
  • Species: human

Experimental Protocol

  • Paradigm: imagery
  • Number of classes: 11
  • Class labels: reach_forward, reach_backward, reach_left, reach_right, reach_up, reach_down, grasp_cup, grasp_ball, grasp_card, twist_pronation, twist_supination
  • Trial duration: 4.0 s
  • Study design: 11 intuitive upper-limb movement tasks (6 reaching, 3 grasping, 2 wrist twisting), each recorded under both motor imagery (MI) and real movement (ME) conditions over 3 sessions
  • Feedback type: none
  • Stimulus type: text cues
  • Stimulus modalities: visual
  • Primary modality: visual
  • Synchronicity: synchronous
  • Mode: offline
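
Given the 4 s trials starting at the cue, epoching with MNE could look like the sketch below (assuming raw from the Quickstart and that trial onsets are stored as annotations; the exact label names should be verified):

import mne

# Read trial onsets from the recording's annotations
events, found_event_id = mne.events_from_annotations(raw)

# One epoch per trial over the documented [0, 4] s interval
epochs = mne.Epochs(
    raw, events, event_id=found_event_id,
    tmin=0.0, tmax=4.0, baseline=None, preload=True,
)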

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

reach_forward
     ├─ Sensory-event
     └─ Label/reach_forward

reach_backward
     ├─ Sensory-event
     └─ Label/reach_backward

reach_left
     ├─ Sensory-event
     └─ Label/reach_left

reach_right
     ├─ Sensory-event
     └─ Label/reach_right

reach_up
     ├─ Sensory-event
     └─ Label/reach_up

reach_down
     ├─ Sensory-event
     └─ Label/reach_down

grasp_cup
     ├─ Sensory-event
     └─ Label/grasp_cup

grasp_ball
     ├─ Sensory-event
     └─ Label/grasp_ball

grasp_card
     ├─ Sensory-event
     └─ Label/grasp_card

twist_pronation
     ├─ Sensory-event
     └─ Label/twist_pronation

twist_supination
     ├─ Sensory-event
     └─ Label/twist_supination
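
In string form, each event's HED annotation simply combines the two tags shown in its tree. An illustrative snippet:

# Build the HED annotation string for each class label shown above
labels = [
    "reach_forward", "reach_backward", "reach_left", "reach_right",
    "reach_up", "reach_down", "grasp_cup", "grasp_ball", "grasp_card",
    "twist_pronation", "twist_supination",
]
hed_strings = {name: f"Sensory-event, Label/{name}" for name in labels}
print(hed_strings["reach_forward"])  # Sensory-event, Label/reach_forward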

Paradigm-Specific Parameters

  • Detected paradigm: motor_imagery
  • Imagery tasks: reach_forward, reach_backward, reach_left, reach_right, reach_up, reach_down, grasp_cup, grasp_ball, grasp_card, twist_pronation, twist_supination
  • Imagery duration: 4.0 s

Data Structure

  • Trials: 41,250
  • Trials context: 25 subjects × 3 sessions × 550 trials per session (300 reaching + 150 grasping + 100 twisting)
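
The total is a direct product of the breakdown, as this quick check shows:

# 25 subjects x 3 sessions x (300 + 150 + 100) trials per session
assert 25 * 3 * (300 + 150 + 100) == 41250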

Signal Processing

  • Classifiers: CSP+RLDA
  • Feature extraction: CSP
  • Spatial filters: CSP
  • Frequency bands: mu/beta = [8.0, 30.0] Hz

Cross-Validation

  • Method: 10×10-fold
  • Folds: 10
  • Evaluation type: within_session
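
This baseline (band-pass 8–30 Hz, CSP features, regularized LDA, 10×10-fold cross-validation) can be approximated with MNE and scikit-learn. A sketch, assuming epochs from the epoching example above; shrinkage LDA stands in for the RLDA, and the original paper's exact settings are not reproduced:

from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Band-pass to the mu/beta band, then use CSP log-variance features
X = epochs.copy().filter(8.0, 30.0).get_data()
y = epochs.events[:, -1]

clf = make_pipeline(
    CSP(n_components=6),
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")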

BCI Application

  • Applications: motor_control, prosthetics
  • Environment: laboratory
  • Online feedback: False

Tags

  • Pathology: Healthy
  • Modality: Motor
  • Type: Research

Documentation

  • DOI: 10.1093/gigascience/giaa098
  • License: CC0-1.0
  • Investigators: Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan Lee
  • Institution: Korea University
  • Country: KR
  • Data URL: https://zenodo.org/records/19021436
  • Publication year: 2020

References

Jeong, J.-H., Cho, J.-H., Shim, K.-H., et al. (2020). Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions. GigaScience, 9(10), giaa098. https://doi.org/10.1093/gigascience/giaa098

Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb

Dataset Information#

Dataset ID

NM000311

Title

Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)

Author (year)

Jeong2020

Canonical / Importable as

NM000311, Jeong2020

Year

2020

Authors

Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan Lee

License

CC0-1.0

Citation / DOI

10.82901/nemar.nm000311

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{nm000311,
  title = {Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)},
  author = {Ji-Hoon Jeong and Jeong-Hyun Cho and Kyung-Hwan Shim and Byoung-Hee Kwon and Byeong-Hoo Lee and Do-Yeun Lee and Dae-Hyeok Lee and Seong-Whan Lee},
  year = {2020},
  doi = {10.82901/nemar.nm000311},
  url = {https://doi.org/10.82901/nemar.nm000311},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 25

  • Recordings: 213

  • Tasks: 1

Channels & sampling rate
  • Channels: 71

  • Sampling rate (Hz): 1000.0

  • Duration (hours): 124.06

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Motor

Files & format
  • Size on disk: 88.6 GB

  • File count: 213

  • Format: BIDS

License & citation
  • License: CC0-1.0

  • DOI: 10.82901/nemar.nm000311

API Reference#

Use the NM000311 class to access this dataset programmatically.

class eegdash.dataset.NM000311(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)

Study:

nm000311 (NeMAR)

Author (year):

Jeong2020

Canonical:

Also importable as: NM000311, Jeong2020.

Modality: eeg; Experiment type: Motor; Subject type: Healthy. Subjects: 25; recordings: 213; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. The query argument supports MongoDB-style filters on the fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
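
For example, the per-recording metadata table can be inspected and used to split the collection (a sketch; description behaves like a pandas DataFrame and split follows the braindecode concat-dataset API):

>>> print(dataset.description.head())
>>> per_subject = dataset.split("subject")  # dict: subject id -> sub-dataset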

References

OpenNeuro dataset: https://openneuro.org/datasets/nm000311
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000311
DOI: https://doi.org/10.82901/nemar.nm000311

Examples

>>> from eegdash.dataset import NM000311
>>> dataset = NM000311(cache_dir="./data")
>>> recording = dataset.datasets[0]
>>> raw = recording.raw
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None
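
Usage (a sketch; the destination path is illustrative):

>>> dataset.save("./data/nm000311_copy", overwrite=True)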

See Also#