NM000162: EEG dataset, 20 subjects#

BNCI 2025-001 Motor Kinematics Reaching dataset

Access recordings and metadata through EEGDash.

Citation: Nitikorn Srisrisawang, Gernot R Müller-Putz (2024). BNCI 2025-001 Motor Kinematics Reaching dataset.

Modality: eeg | Subjects: 20 | Recordings: 20 | License: CC-BY-4.0 | Source: nemar

Metadata: 90% complete

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000162

dataset = NM000162(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000162(cache_dir="./data", subject="01")

Advanced query

dataset = NM000162(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000162,
  title  = {BNCI 2025-001 Motor Kinematics Reaching dataset},
  author = {Nitikorn Srisrisawang and Gernot R Müller-Putz},
  year   = {2024},
  doi    = {10.1088/1741-2552/ada0ea},
}

About This Dataset#

BNCI 2025-001 Motor Kinematics Reaching dataset.

Dataset Overview

  • Code: BNCI2025-001

  • Paradigm: imagery

  • DOI: 10.1088/1741-2552/ada0ea

  • Subjects: 20

  • Sessions per subject: 1

  • Events: up_slow_near=1, up_slow_far=2, up_fast_near=3, up_fast_far=4, down_slow_near=5, down_slow_far=6, down_fast_near=7, down_fast_far=8, left_slow_near=9, left_slow_far=10, left_fast_near=11, left_fast_far=12, right_slow_near=13, right_slow_far=14, right_fast_near=15, right_fast_far=16

  • Trial interval: [0, 4] s

  • File format: EEG (BrainAmp)

  • Data preprocessed: True
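
The 16 event codes follow a regular direction × speed × distance ordering. As a sanity check, the full mapping listed above can be rebuilt in a few lines of pure Python (names here are illustrative; use the event table itself as the source of truth):

```python
# Rebuild the event-name → trigger-code table listed above.
# Codes increment with distance fastest (near, far), then speed (slow, fast),
# then direction (up, down, left, right).
EVENT_ID = {
    f"{direction}_{speed}_{distance}": code
    for code, (direction, speed, distance) in enumerate(
        (
            (d, s, m)
            for d in ("up", "down", "left", "right")
            for s in ("slow", "fast")
            for m in ("near", "far")
        ),
        start=1,
    )
}
```

A dictionary in this shape can be passed as `event_id` when epoching with MNE.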

Acquisition

  • Sampling rate: 500.0 Hz

  • Number of channels: 67

  • Channel types: eeg=67, eog=4

  • Channel names: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOGL1, EOGL2, EOGL3, EOGR1, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, PPO1h, PPO2h, Pz, T7, T8, TP7, TP8, targetPosX, targetPoxY, validity, vx, vy, x, y

  • Montage: af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2

  • Hardware: BrainAmp

  • Software: EEGLAB

  • Reference: common average

  • Sensor type: EEG

  • Line frequency: 50.0 Hz

  • Online filters: 50 Hz notch

  • Cap manufacturer: Zebris Medical GmbH

  • Cap model: ELPOS

  • Auxiliary channels: EOG (4 ch, horizontal, vertical)
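
The channel list above mixes EEG electrodes, EOG channels, and kinematic/behavioral channels (target position, hand position x/y, velocities vx/vy, validity). A hedged name-based grouping, inferred purely from the names as listed (including "targetPoxY" exactly as it appears in the metadata), might look like:

```python
# Kinematic/behavioral channel names copied verbatim from the metadata above
# ("targetPoxY" is kept as listed; it likely denotes the target's Y position).
KINEMATIC = {"targetPosX", "targetPoxY", "validity", "vx", "vy", "x", "y"}

def channel_kind(name: str) -> str:
    """Rough channel classification by name: 'eeg', 'eog', or 'kinematic'."""
    if name in KINEMATIC:
        return "kinematic"
    if name.startswith("EOG"):
        return "eog"
    return "eeg"
```

For real analyses, prefer the channel types recorded in the BIDS `channels.tsv` over name-based heuristics like this one.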

Participants

  • Number of subjects: 20

  • Health status: healthy

  • Clinical population: Healthy

  • Age: mean=26.1, std=4.1

  • Gender distribution: male=12, female=8

  • Handedness: right=17, left=3

  • Species: human

Experimental Protocol

  • Paradigm: imagery

  • Task type: discrete reaching

  • Number of classes: 16

  • Class labels: up_slow_near, up_slow_far, up_fast_near, up_fast_far, down_slow_near, down_slow_far, down_fast_near, down_fast_far, left_slow_near, left_slow_far, left_fast_near, left_fast_far, right_slow_near, right_slow_far, right_fast_near, right_fast_far

  • Tasks: discrete reaching

  • Study design: Four-direction center-out reaching task with varying speeds (quick/slow) and distances (near/far) following visual cue, self-paced execution with eye fixation on cue

  • Feedback type: visual (cue color: green for correct, red for incorrect direction)

  • Stimulus type: visual cue

  • Stimulus modalities: visual

  • Primary modality: visual

  • Synchronicity: cue-paced

  • Mode: both

  • Instructions: Follow cue with eyes, wait at least 1s after cue stops, mimic movement while fixating eyes on cue, move smoothly with whole arm avoiding wrist rotation
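
Each of the 16 class labels encodes the three factors that the original study decodes separately (4-class direction, 2-class speed, 2-class distance). A small helper to split a label into those sub-problems (illustrative; the label format is the `direction_speed_distance` convention used above):

```python
def split_label(label: str) -> dict:
    """Split a class label like 'up_slow_near' into the three factors
    analyzed in the original study: direction, speed, and distance."""
    direction, speed, distance = label.split("_")
    return {"direction": direction, "speed": speed, "distance": distance}
```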

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

up_slow_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Upward
           ├─ Label/slow
           └─ Label/near

up_slow_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Upward
           ├─ Label/slow
           └─ Label/far

up_fast_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Upward
           ├─ Label/fast
           └─ Label/near

up_fast_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Upward
           ├─ Label/fast
           └─ Label/far

down_slow_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Downward
           ├─ Label/slow
           └─ Label/near

down_slow_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Downward
           ├─ Label/slow
           └─ Label/far

down_fast_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Downward
           ├─ Label/fast
           └─ Label/near

down_fast_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Downward
           ├─ Label/fast
           └─ Label/far

left_slow_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Left
           ├─ Label/slow
           └─ Label/near

left_slow_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Left
           ├─ Label/slow
           └─ Label/far

left_fast_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Left
           ├─ Label/fast
           └─ Label/near

left_fast_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Left
           ├─ Label/fast
           └─ Label/far

right_slow_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Right
           ├─ Label/slow
           └─ Label/near

right_slow_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Right
           ├─ Label/slow
           └─ Label/far

right_fast_near
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Right
           ├─ Label/fast
           └─ Label/near

right_fast_far
     ├─ Sensory-event, Experimental-stimulus, Visual-presentation
     └─ Agent-action
        └─ Reach
           ├─ Right
           ├─ Label/fast
           └─ Label/far

Paradigm-Specific Parameters

  • Detected paradigm: motor_imagery

  • Number of targets: 4

  • Imagery tasks: right_hand_reaching

Data Structure

  • Trials: 960

  • Trials per direction: up=240, down=240, left=240, right=240

  • Blocks per session: 10

  • Block duration: 1200.0 s

  • Trials context: per_participant (before rejection)
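
These counts are consistent with the 32-condition design described in the Methodology (4 directions × 2 speeds × 2 distances × outward/inward phases, 30 trials per condition). A quick arithmetic check:

```python
# Condition structure per the methodology; "phases" = outward/inward movements.
directions, speeds, distances, phases = 4, 2, 2, 2
conditions = directions * speeds * distances * phases   # 32 conditions
trials_per_condition = 30

total_trials = conditions * trials_per_condition        # trials per participant
trials_per_direction = total_trials // directions       # pooled over other factors
```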

Preprocessing

  • Data state: preprocessed with eye artifact correction

  • Preprocessing applied: True

  • Steps: low-pass filter at 100 Hz, notch filter at 50 Hz, downsampling to 200 Hz, bad channel rejection and interpolation, bandpass filter 0.3-80 Hz, eye artifact correction via SGEYESUB, ICA with FastICA algorithm, IC artifact removal, low-pass filter at 3 Hz, downsampling to 10 Hz, bad trial rejection, common average reference

  • Highpass filter: 0.3 Hz

  • Lowpass filter: 100.0 Hz

  • Bandpass filter: 0.3-80.0 Hz

  • Notch filter: [50] Hz

  • Filter type: Butterworth

  • Filter order: 2

  • Artifact methods: ICA, SGEYESUB (Sparse Generalized Eye Artifact Subspace Subtraction), IClabel plugin

  • Re-reference: common average

  • Downsampled to: 200.0 Hz

  • Epoch window: [-3.0, 4.0] s

  • Notes: Frontal channels (AF7, AF3, AFz, AF4, AF8) and EOG removed prior to CAR to reduce residual eye artifacts. Final analysis used 55 channels. Eye blocks recorded separately for SGEYESUB model training. Bad trials rejected based on amplitude >200 µV or standard deviation >5SD. Movement-related bad trials rejected for incorrect direction, no movement, duration <0.2s or >4s, or movement initiated <0.5s after cue stop.
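
The amplitude- and SD-based trial-rejection criteria from the notes can be sketched in NumPy. This is an illustrative reconstruction of the stated thresholds, not the authors' EEGLAB implementation:

```python
import numpy as np

def keep_trials(epochs, amp_uv=200.0, sd_factor=5.0):
    """Boolean mask of trials to keep, per the criteria stated above:
    reject a trial if any sample exceeds `amp_uv` microvolts in absolute
    value, or exceeds `sd_factor` times the per-channel standard deviation.

    epochs : (n_trials, n_channels, n_samples) array in microvolts.
    """
    amp_bad = np.abs(epochs).max(axis=(1, 2)) > amp_uv
    sd = epochs.std(axis=(0, 2), keepdims=True)   # per-channel SD over all trials
    sd_bad = (np.abs(epochs) > sd_factor * sd).any(axis=(1, 2))
    return ~(amp_bad | sd_bad)
```

The movement-related rejection criteria (wrong direction, no movement, implausible duration) additionally require the kinematic channels and are not covered by this sketch.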

Signal Processing

  • Classifiers: sLDA (shrinkage Linear Discriminant Analysis)

  • Feature extraction: Low-frequency EEG (0.3-3 Hz), Source localization (sLORETA), ICA, ROI-based features

  • Frequency bands: delta=[0.3, 3.0] Hz; analyzed=[0.3, 100.0] Hz

  • Spatial filters: Common Average Reference, Source-space projection

Cross-Validation

  • Method: stratified k-fold

  • Folds: 10

  • Evaluation type: within_session
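
The stratified 10-fold evaluation with shrinkage LDA can be reproduced with scikit-learn: the `lsqr` solver with `shrinkage="auto"` gives a Ledoit-Wolf shrinkage LDA comparable to sLDA. The features below are synthetic stand-ins (the real study used low-frequency EEG and source-localized features):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in features: 160 trials × 20 features, 4 "direction"
# classes with shifted means (illustrative only).
n_per_class, n_features = 40, 20
X = np.concatenate(
    [rng.standard_normal((n_per_class, n_features)) + 2.0 * k for k in range(4)]
)
y = np.repeat(np.arange(4), n_per_class)

slda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(slda, X, y, cv=cv)
```

Stratification keeps the class proportions equal across folds, which matters here because the within-session evaluation compares 4-class (direction) and 2-class (speed, distance) problems.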

Performance (Original Study)

Accuracies are in percent. CStp = cue-stop-aligned epochs; MOn = movement-onset-aligned epochs (see Methodology).

  • Direction accuracy (CStp): 39.75%

  • Direction accuracy (MOn): 42.42%

  • Speed accuracy (CStp): 66.03%

  • Speed accuracy (MOn): 70.49%

  • Distance accuracy (CStp): 60.83%

  • Distance accuracy (MOn): 55.41%

  • Quick-trials direction accuracy (CStp): 44.12%

  • Quick-trials direction accuracy (MOn): 49.67%

  • Slow-trials direction accuracy (CStp): 37.42%

  • Slow-trials direction accuracy (MOn): 35.89%

BCI Application

  • Applications: motor_control, rehabilitation

  • Environment: laboratory

  • Online feedback: True

Tags

  • Pathology: Healthy

  • Modality: Visual

  • Type: Motor

Documentation

  • Description: EEG dataset investigating simultaneous encoding of speed, distance, and direction in discrete hand reaching movements using a four-direction center-out task

  • DOI: 10.1088/1741-2552/ada0ea

  • License: CC-BY-4.0

  • Investigators: Nitikorn Srisrisawang, Gernot R Müller-Putz

  • Senior author: Gernot R Müller-Putz

  • Contact: gernot.mueller@tugraz.at

  • Institution: Institute of Neural Engineering, Graz University of Technology

  • Department: Institute of Neural Engineering

  • Address: Stremayrgasse 16/IV, 8010 Graz, Austria

  • Country: Austria

  • Repository: GitHub

  • Data URL: https://github.com/rkobler/eyeartifactcorrection

  • Publication year: 2024

  • Funding: Royal Thai Government (scholar funding for N.S.); BioTechMed Graz

  • Ethics approval: Ethical committee at the Graz University of Technology (EK-28/2024); Declaration of Helsinki

  • Acknowledgements: Members of the Graz BCI team, especially Markus Crell for providing motion capture software

  • Keywords: electroencephalography, brain–computer interface, source localization, discrete reaching, center-out task

Abstract

Objective. The complicated processes of carrying out a hand reach are still far from fully understood. In order to further the understanding of the kinematics of hand movement, the simultaneous representation of speed, distance, and direction in the brain is explored. Approach. We utilized electroencephalography (EEG) signals and hand position recorded during a four-direction center-out reaching task with either quick or slow speed, near and far distance. Linear models were employed in two modes: decoding and encoding. First, to test the discriminability of speed, distance, and direction. Second, to find the contribution of the cortical sources via the source localization. Additionally, we compared the decoding accuracy when using features obtained from EEG signals and source-localized EEG signals based on the results from the encoding model. Main results. Speed, distance, and direction can be classified better than chance. The accuracy of the speed was also higher than the distance, indicating a stronger representation of the speed than the distance. The speed and distance showed similar significant sources in the central regions related to the movement initiation, while the direction indicated significant sources in the parieto-occipital regions related to the movement preparation. The combination of the features from EEG and source localized signals improved the classification. Significance. Directional and non-directional information are represented in two separate networks. The quick movement resulted in improvement in the direction classification. Our results enhance our understanding of hand movement in the brain and help us make informed decisions when designing an improved paradigm in the future.

Methodology

Participants performed discrete reaching movements in four directions (up, down, left, right) at two speeds (quick: 0.4-0.8 s cue duration; slow: 1.2-2.4 s cue duration) and two distances (near: ~5 cm / 8.7 cm actual; far: ~10 cm / 15.6 cm actual). Each trial consisted of an outward and an inward movement. A visual cue moved from the center to the target position; participants waited ≥1 s after the cue stopped before mimicking the movement with their eyes fixated on the cue. Hand position was tracked via camera using a pink marker on the right index finger. The design comprised 32 conditions (2 speeds × 2 distances × 4 directions × 2 outward/inward) with 30 trials per condition, i.e. 960 trials per participant; after rejection, ~852 trials remained.

EEG was processed with EEGLAB on MATLAB R2019b. Signals were epoched in two alignments: cue-stop-aligned (CStp: -3 to 4 s) and movement-onset-aligned (MOn: -3 to 3 s).

Analysis included MRCP analysis, point-wise classification with instantaneous and windowed (500 ms) features, an encoding model using a GLM, source localization using a BEM with the ICBM152 template and the sLORETA inverse solution via Brainstorm, and source-space classification using data-driven ROIs derived from the encoding model. Classification was performed with shrinkage LDA. Permutation testing (1000 repetitions) was used for significance; FDR was controlled using the Benjamini-Hochberg procedure.

References

Srisrisawang, N., & Müller-Putz, G. R. (2024). Simultaneous encoding of speed, distance, and direction in discrete reaching: an EEG study. Journal of Neural Engineering, 21(6). https://doi.org/10.1088/1741-2552/ada0ea

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Notes

.. versionadded:: 1.3.0

This dataset is notable for its multi-parameter kinematic design, enabling study of how multiple movement parameters are represented simultaneously in EEG activity. The paradigm uses movement execution rather than motor imagery, making it complementary to MI datasets. The data is compatible with the MOABB motor imagery paradigm for processing purposes, though the underlying task is movement execution.

Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb

Dataset Information#

Dataset ID

NM000162

Title

BNCI 2025-001 Motor Kinematics Reaching dataset

Author (year)

Srisrisawang2025

Canonical

BNCI2025

Importable as

NM000162, Srisrisawang2025, BNCI2025

Year

2024

Authors

Nitikorn Srisrisawang, Gernot R Müller-Putz

License

CC-BY-4.0

Citation / DOI

10.1088/1741-2552/ada0ea

Source links

OpenNeuro | NeMAR | Source URL

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 20

  • Recordings: 20

  • Tasks: 1

Channels & sampling rate
  • Channels: 67

  • Sampling rate (Hz): 500.0

  • Duration (hours): 44.45

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Motor

Files & format
  • Size on disk: 15.0 GB

  • File count: 20

  • Format: BIDS

License & citation
  • License: CC-BY-4.0

  • DOI: 10.1088/1741-2552/ada0ea


API Reference#

Use the NM000162 class to access this dataset programmatically.

class eegdash.dataset.NM000162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

BNCI 2025-001 Motor Kinematics Reaching dataset

Study:

nm000162 (NeMAR)

Author (year):

Srisrisawang2025

Canonical:

BNCI2025

Also importable as: NM000162, Srisrisawang2025, BNCI2025.

Modality: eeg; Experiment type: Motor; Subject type: Healthy. Subjects: 20; recordings: 20; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
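
For example, a MongoDB-style filter for a subset of subjects can be built as a plain dict and passed via `query=` (whether a field is queryable depends on ALLOWED_QUERY_FIELDS):

```python
# Build a MongoDB-style filter for subjects "01" through "05"; subject IDs
# are zero-padded two-digit strings, as in the Quickstart examples.
subjects = [f"{i:02d}" for i in range(1, 6)]
query = {"subject": {"$in": subjects}}
```

The resulting dict is what the Quickstart's "Advanced query" example passes, e.g. `NM000162(cache_dir="./data", query=query)`.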

References

OpenNeuro dataset: https://openneuro.org/datasets/nm000162
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000162

Examples

>>> from eegdash.dataset import NM000162
>>> dataset = NM000162(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#