NM000127: EEG dataset, 40 subjects#
Kim2025 – 40-class beta-range SSVEP speller dataset
Access recordings and metadata through EEGDash.
Citation: Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun (2025). Kim2025 – 40-class beta-range SSVEP speller dataset.
Modality: eeg | Subjects: 40 | Recordings: 240 | License: CC BY 4.0 | Source: NeMAR
Metadata: 90% complete
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000127
dataset = NM000127(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000127(cache_dir="./data", subject="01")
Advanced query
dataset = NM000127(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000127,
  title  = {Kim2025 – 40-class beta-range SSVEP speller dataset},
  author = {Heegyu Kim and Kyungho Won and Minkyu Ahn and Sung Chan Jun},
  year   = {2025},
  doi    = {10.1038/s41597-025-06032-2},
}
About This Dataset#
40-class beta-range SSVEP speller dataset
Dataset Overview
Code: Kim2025BetaRange
Paradigm: ssvep
DOI: 10.1038/s41597-025-06032-2
Subjects: 40
Sessions per subject: 6
Events: 14=1, 15=2, 16=3, 17=4, 18=5, 19=6, 20=7, 21=8, 14.2=9, 15.2=10, 16.2=11, 17.2=12, 18.2=13, 19.2=14, 20.2=15, 21.2=16, 14.4=17, 15.4=18, 16.4=19, 17.4=20, 18.4=21, 19.4=22, 20.4=23, 21.4=24, 14.6=25, 15.6=26, 16.6=27, 17.6=28, 18.6=29, 19.6=30, 20.6=31, 21.6=32, 14.8=33, 15.8=34, 16.8=35, 17.8=36, 18.8=37, 19.8=38, 20.8=39, 21.8=40
Trial interval: [0.0, 5.0] s
File format: MAT
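The event table above maps each stimulus label (its frequency in Hz, e.g. "14.2") to a class code from 1 to 40: codes run through the integer frequencies 14–21 Hz first, then repeat for each fractional offset 0.2, 0.4, 0.6, 0.8. A minimal sketch of that mapping (illustrative only, not part of EEGDash; the function name is hypothetical):

```python
def ssvep_class_code(freq_hz: float) -> int:
    """Derive the 1-40 class code from a stimulus frequency, per the event table."""
    base = int(freq_hz)                       # integer part, 14..21
    offset = round(freq_hz - base, 1)         # fractional part, 0.0..0.8
    return (base - 14) + 1 + 8 * int(round(offset / 0.2))

print(ssvep_class_code(14.0))  # 1
print(ssvep_class_code(14.2))  # 9
print(ssvep_class_code(21.8))  # 40
```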
Acquisition
Sampling rate: 1024.0 Hz
Number of channels: 31
Channel types: eeg=31, misc=2
Montage: standard_1005
Hardware: BioSemi ActiveTwo
Software: OpenViBE
Reference: CMS/DRL
Ground: CMS/DRL near Pz
Sensor type: active
Line frequency: 60.0 Hz
Impedance threshold: 5 kOhm
Cap manufacturer: BioSemi
Electrode type: wet
Electrode material: Ag/AgCl
Participants
Number of subjects: 40
Health status: healthy
Age: mean=22.8, std=3.34, min=20, max=35
Gender distribution: male=25, female=15
BCI experience: 3 of 40 had prior SSVEP-BCI experience
Experimental Protocol
Paradigm: ssvep
Task type: speller
Number of classes: 40
Class labels: 14, 15, 16, 17, 18, 19, 20, 21, 14.2, 15.2, 16.2, 17.2, 18.2, 19.2, 20.2, 21.2, 14.4, 15.4, 16.4, 17.4, 18.4, 19.4, 20.4, 21.4, 14.6, 15.6, 16.6, 17.6, 18.6, 19.6, 20.6, 21.6, 14.8, 15.8, 16.8, 17.8, 18.8, 19.8, 20.8, 21.8
Trial duration: 5.0 s
Feedback type: none
Stimulus type: JFPM visual flicker
Stimulus modalities: visual
Primary modality: visual
Synchronicity: synchronous
Mode: offline
Training/test split: True
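Combining this protocol with the acquisition parameters above (31 EEG channels, 1024 Hz sampling, 5.0 s trials), one block of 40 trials epochs into an array of shape (40, 31, 5120). A minimal sketch, with illustrative variable names:

```python
import numpy as np

# Expected epoch dimensions for one block of 40 trials, given
# 31 EEG channels, a 1024 Hz sampling rate, and 5.0 s trials.
n_trials, n_channels, sfreq, trial_sec = 40, 31, 1024, 5.0
n_samples = int(sfreq * trial_sec)            # 5120 samples per trial
epochs = np.zeros((n_trials, n_channels, n_samples))
print(epochs.shape)  # (40, 31, 5120)
```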
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
14
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14
15
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15
16
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/16
17
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/17
18
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/18
19
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/19
20
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/20
21
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/21
14.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_2
15.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_2
16.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/16_2
17.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/17_2
18.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/18_2
19.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/19_2
20.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/20_2
21.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/21_2
14.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_4
15.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_4
16.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/16_4
17.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/17_4
18.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/18_4
19.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/19_4
20.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/20_4
21.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/21_4
14.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_6
15.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_6
16.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/16_6
17.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/17_6
18.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/18_6
19.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/19_6
20.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/20_6
21.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/21_6
14.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_8
15.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_8
16.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/16_8
17.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/17_8
18.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/18_8
19.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/19_8
20.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/20_8
21.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/21_8
Paradigm-Specific Parameters
Detected paradigm: ssvep
Stimulus frequencies: [14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8, 16.0, 16.2, 16.4, 16.6, 16.8, 17.0, 17.2, 17.4, 17.6, 17.8, 18.0, 18.2, 18.4, 18.6, 18.8, 19.0, 19.2, 19.4, 19.6, 19.8, 20.0, 20.2, 20.4, 20.6, 20.8, 21.0, 21.2, 21.4, 21.6, 21.8] Hz
Frequency resolution: 0.2 Hz
Data Structure
Trials: 240
Blocks per session: 6
Preprocessing
Data state: epoched
Signal Processing
Classifiers: CCA, FBCCA, ITCCA, TRCA, EEGNet
Feature extraction: CCA, FBCCA, TRCA
Frequency bands: stimulus_range=[14.0, 22.0] Hz; analysis=[13.0, 89.0] Hz
Spatial filters: CCA, TRCA
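Among the classifiers listed above, CCA works by correlating each trial against sinusoidal reference templates at every candidate stimulus frequency (plus harmonics) and picking the frequency with the largest canonical correlation. A NumPy-only sketch of that idea, not the benchmark implementation (function names and the synthetic check are illustrative):

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_classify(trial, freqs, sfreq, n_harmonics=2):
    """Pick the stimulus frequency whose sin/cos references best match the trial.

    trial: (n_samples, n_channels) EEG segment.
    """
    t = np.arange(trial.shape[0]) / sfreq
    scores = []
    for f in freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)]
        )
        scores.append(max_canon_corr(trial, refs))
    return freqs[int(np.argmax(scores))]

# Synthetic check: a noisy 14.2 Hz signal over a 5 s trial should be
# classified as 14.2 Hz against neighboring candidates 0.2 Hz away.
rng = np.random.default_rng(0)
sfreq, dur = 256, 5.0
t = np.arange(int(sfreq * dur)) / sfreq
trial = np.column_stack([np.sin(2 * np.pi * 14.2 * t),
                         np.cos(2 * np.pi * 14.2 * t)])
trial += 0.3 * rng.standard_normal(trial.shape)
print(cca_classify(trial, [14.0, 14.2, 14.4], sfreq))  # 14.2
```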
Cross-Validation
Method: leave-one-subject-out
Folds: 6
Evaluation type: within_subject, cross_subject
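Leave-one-subject-out cross-validation holds out all recordings from one subject per fold and trains on the rest. A pure-Python sketch of the fold layout, shown here with six hypothetical subject IDs (not the dataset's actual split):

```python
# Each fold tests on one held-out subject and trains on all the others.
subjects = [f"{i:02d}" for i in range(1, 7)]   # illustrative: 6 subjects

folds = [
    {"test": s, "train": [t for t in subjects if t != s]}
    for s in subjects
]

print(len(folds))              # 6 folds, one per held-out subject
print(folds[0]["test"])        # 01
print(len(folds[0]["train"]))  # 5
```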
BCI Application
Applications: speller
Environment: lab
Tags
Pathology: healthy
Modality: visual
Type: perception
Documentation
DOI: 10.1038/s41597-025-06032-2
License: CC BY 4.0
Investigators: Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun
Senior author: Sung Chan Jun
Institution: Gwangju Institute of Science and Technology
Department: School of Electrical Engineering and Computer Science, GIST
Country: KR
Repository: Figshare
Publication year: 2025
Ethics approval: GIST IRB, No. 20211201-HR-64-02-04
Keywords: SSVEP, BCI, beta range, visual fatigue, 40-class speller, JFPM, EEG
References
H. Kim, K. Won, M. Ahn, and S. C. Jun (2025). "A 40-class SSVEP speller dataset: beta range stimulation for low-fatigue BCI applications." Scientific Data, 12, 1751. https://doi.org/10.1038/s41597-025-06032-2
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID | nm000127
Title | Kim2025 – 40-class beta-range SSVEP speller dataset
Author (year) | Kim2025_SSVEP
Canonical | Kim2025
Importable as | NM000127, Kim2025_SSVEP, Kim2025
Year | 2025
Authors | Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun
License | CC BY 4.0
Citation / DOI | 10.1038/s41597-025-06032-2
Source links | OpenNeuro, NeMAR, Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 40
Recordings: 240
Tasks: 1
Channels: 31
Sampling rate (Hz): 1024.0
Duration (hours): 18.93
Pathology: Healthy
Modality: Visual
Type: Perception
Size on disk: 8.1 GB
File count: 240
Format: BIDS
License: CC BY 4.0
DOI: 10.1038/s41597-025-06032-2
API Reference#
Use the NM000127 class to access this dataset programmatically.
class eegdash.dataset.NM000127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
Bases: EEGDashDataset

Kim2025 – 40-class beta-range SSVEP speller dataset

- Study: nm000127 (NeMAR)
- Author (year): Kim2025_SSVEP
- Canonical: Kim2025
- Also importable as: NM000127, Kim2025_SSVEP, Kim2025
- Modality: eeg; Experiment type: Perception; Subject type: Healthy
- Subjects: 40; recordings: 240; tasks: 1

Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
data_dir#
Local dataset cache directory (cache_dir / dataset_id).
Type: Path

query#
Merged query with the dataset filter applied.
Type: dict

records#
Metadata records used to build the dataset, if pre-fetched.
Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/nm000127
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000127
Examples
>>> from eegdash.dataset import NM000127
>>> dataset = NM000127(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset