NM000128: EEG dataset, 59 subjects#

Dong2023 – 59-subject 40-class SSVEP dataset

Access recordings and metadata through EEGDash.

Citation: Yue Dong, Sen Tian (2023). Dong2023 – 59-subject 40-class SSVEP dataset.

Modality: EEG | Subjects: 59 | Recordings: 59 | License: CC BY-NC 4.0 | Source: NeMAR

Metadata: 90% complete

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000128

dataset = NM000128(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000128(cache_dir="./data", subject="01")

Advanced query

dataset = NM000128(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset.datasets:
    print(rec.description['subject'], rec.raw.info['sfreq'])
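
Inspect recording metadata

Recording-level metadata are also exposed as a table via dataset.description (see the API notes below); a minimal sketch, assuming it behaves as a pandas DataFrame:

# Tabular view of subjects, tasks, and other recording fields
print(dataset.description.head())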

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000128,
  title = {Dong2023 – 59-subject 40-class SSVEP dataset},
  author = {Yue Dong and Sen Tian},
  year = {2023},
  doi = {10.26599/BSA.2023.9050020},
}

About This Dataset#

59-subject 40-class SSVEP dataset.

Dataset Overview

  • Code: Dong2023

  • Paradigm: ssvep

  • DOI: 10.26599/BSA.2023.9050020

  • Subjects: 59

  • Sessions per subject: 1

  • Events: 8=1, 8.2=2, 8.4=3, 8.6=4, 8.8=5, 9=6, 9.2=7, 9.4=8, 9.6=9, 9.8=10, 10=11, 10.2=12, 10.4=13, 10.6=14, 10.8=15, 11=16, 11.2=17, 11.4=18, 11.6=19, 11.8=20, 12=21, 12.2=22, 12.4=23, 12.6=24, 12.8=25, 13=26, 13.2=27, 13.4=28, 13.6=29, 13.8=30, 14=31, 14.2=32, 14.4=33, 14.6=34, 14.8=35, 15=36, 15.2=37, 15.4=38, 15.6=39, 15.8=40

  • Trial interval: [0.5, 4.5] s

  • File format: MAT
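
The event codes and trial interval above map directly onto an epoching call. A minimal sketch with MNE, assuming the event names ("8", "8.2", ...) are stored as annotations on the raw object loaded in the Quickstart:

import mne

raw = dataset.datasets[0].raw
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=0.5, tmax=4.5,      # trial interval listed above
    baseline=None, preload=True,
)
print(epochs)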

Acquisition

  • Sampling rate: 250.0 Hz

  • Number of channels: 8

  • Channel types: eeg=8

  • Channel names: POz, PO3, PO4, PO7, PO8, Oz, O1, O2

  • Montage: standard_1005

  • Hardware: NeuSenW (Neuracle)

  • Reference: Fp1

  • Ground: Fp2

  • Sensor type: semi-dry (pre-gelled)

  • Line frequency: 50.0 Hz
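
As an illustration of how these acquisition details are typically applied when loading a recording (the montage name and line frequency come from the list above; the preprocessing steps themselves are a hedged example, not part of the dataset):

raw = dataset.datasets[0].raw.copy().load_data()      # dataset from the Quickstart
raw.set_montage("standard_1005", on_missing="warn")   # montage listed above
raw.notch_filter(freqs=50.0)                          # 50 Hz line frequency
print(raw.ch_names)                                   # POz, PO3, PO4, PO7, PO8, Oz, O1, O2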

Participants

  • Number of subjects: 59

  • Health status: healthy

  • Age: mean=12.4, min=10, max=16

  • Gender distribution: male=37, female=22

Experimental Protocol

  • Paradigm: ssvep

  • Task type: SSVEP speller

  • Number of classes: 40

  • Class labels: 8, 8.2, 8.4, 8.6, 8.8, 9, 9.2, 9.4, 9.6, 9.8, 10, 10.2, 10.4, 10.6, 10.8, 11, 11.2, 11.4, 11.6, 11.8, 12, 12.2, 12.4, 12.6, 12.8, 13, 13.2, 13.4, 13.6, 13.8, 14, 14.2, 14.4, 14.6, 14.8, 15, 15.2, 15.4, 15.6, 15.8

  • Trial duration: 4.0 s

  • Feedback type: visual

  • Stimulus type: JFPM visual flicker

  • Stimulus modalities: visual

  • Primary modality: visual

  • Synchronicity: synchronous

  • Mode: offline

  • Training/test split: False

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/8

8.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/8_2

8.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/8_4

8.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/8_6

8.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/8_8

9
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/9

9.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/9_2

9.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/9_4

9.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/9_6

9.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/9_8

10
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/10

10.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/10_2

10.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/10_4

10.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/10_6

10.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/10_8

11
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/11

11.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/11_2

11.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/11_4

11.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/11_6

11.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/11_8

12
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/12

12.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/12_2

12.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/12_4

12.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/12_6

12.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/12_8

13
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/13

13.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/13_2

13.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/13_4

13.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/13_6

13.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/13_8

14
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/14

14.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/14_2

14.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/14_4

14.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/14_6

14.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/14_8

15
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/15

15.2
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/15_2

15.4
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/15_4

15.6
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/15_6

15.8
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/15_8

Paradigm-Specific Parameters

  • Detected paradigm: ssvep

  • Stimulus frequencies: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz

  • Frequency resolution: 0.2 Hz
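
The 40 target frequencies form a regular 8-15.8 Hz grid in 0.2 Hz steps; a quick way to reconstruct it:

import numpy as np

freqs = np.round(np.arange(8.0, 16.0, 0.2), 1)  # 8.0, 8.2, ..., 15.8
assert len(freqs) == 40 and freqs[-1] == 15.8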

Data Structure

  • Trials: 160 (40 classes × 4 blocks per subject)

  • Blocks per session: 4

Preprocessing

  • Data state: epoched

  • Downsampled to: 250.0 Hz

Signal Processing

  • Classifiers: FBCCA, eTRCA, msTRCA

  • Spatial filters: CCA, TRCA
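
FBCCA, eTRCA, and msTRCA are the classifiers reported with the dataset. As a much simpler, hedged illustration (standard CCA against sine/cosine reference signals, not the authors' pipeline), frequency detection on a single epoch could look like:

import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(epoch, freq, sfreq=250.0, n_harmonics=3):
    """Canonical correlation between an epoch (n_channels, n_times)
    and sine/cosine references at freq and its harmonics."""
    t = np.arange(epoch.shape[1]) / sfreq
    refs = np.vstack([
        fn(2 * np.pi * h * freq * t)
        for h in range(1, n_harmonics + 1)
        for fn in (np.sin, np.cos)
    ])
    x, y = CCA(n_components=1).fit_transform(epoch.T, refs.T)
    return np.corrcoef(x[:, 0], y[:, 0])[0, 1]

freqs = np.round(np.arange(8.0, 16.0, 0.2), 1)  # the 40 target frequencies
epoch = np.random.randn(8, 1000)                # placeholder: 8 channels, 4 s at 250 Hz
predicted = freqs[np.argmax([cca_score(epoch, f) for f in freqs])]
print(predicted)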

Cross-Validation

  • Method: leave-one-block-out

  • Folds: 4

  • Evaluation type: within_subject
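
Leave-one-block-out over the 4 blocks maps onto scikit-learn's LeaveOneGroupOut; a minimal sketch with placeholder arrays (the real trial/block bookkeeping has to be recovered from the events):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

y = np.tile(np.arange(40), 4)             # 160 trials: 40 classes x 4 blocks
blocks = np.repeat(np.arange(1, 5), 40)   # block label per trial
X = np.zeros((160, 8, 1000))              # placeholder (trials, channels, samples)

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=blocks):
    print(len(train_idx), len(test_idx))  # 120 train / 40 test trials per fold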

BCI Application

  • Environment: non-shielded

  • Online feedback: True

Tags

  • Pathology: healthy

  • Modality: visual

  • Type: perception

Documentation

  • DOI: 10.26599/BSA.2023.9050020

  • License: CC BY-NC 4.0

  • Investigators: Yue Dong, Sen Tian

  • Senior author: Yue Dong

  • Institution: Jiangsu JITRI Brain Machine Fusion Intelligence Institute

  • Country: CN

  • Repository: Zenodo

  • Data URL: https://zenodo.org/records/18847318

  • Publication year: 2023

References

Y. Dong and S. Tian, “A large database towards user-friendly SSVEP-based BCI,” Brain Science Advances, vol. 9, no. 4, pp. 297-309, 2023. DOI: 10.26599/BSA.2023.9050020

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb

Dataset Information#

Dataset ID

NM000128

Title

Dong2023 – 59-subject 40-class SSVEP dataset

Author (year)

Dong2023

Canonical

Importable as

NM000128, Dong2023

Year

2023

Authors

Yue Dong, Sen Tian

License

CC BY-NC 4.0

Citation / DOI

Unknown

Source links

OpenNeuro | NeMAR | Source URL

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 59

  • Recordings: 59

  • Tasks: 1

Channels & sampling rate
  • Channels: 8

  • Sampling rate (Hz): 250.0

  • Duration (hours): 14.16

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Perception

Files & format
  • Size on disk: 397.1 MB

  • File count: 59

  • Format: BIDS

License & citation
  • License: CC BY-NC 4.0

  • DOI: —

Provenance

API Reference#

Use the NM000128 class to access this dataset programmatically.

class eegdash.dataset.NM000128(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

Dong2023 – 59-subject 40-class SSVEP dataset

Study:

nm000128 (NeMAR)

Author (year):

Dong2023

Canonical:

Also importable as: NM000128, Dong2023.

Modality: eeg; Experiment type: Perception; Subject type: Healthy. Subjects: 59; recordings: 59; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/nm000128

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000128

Examples

>>> from eegdash.dataset import NM000128
>>> dataset = NM000128(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#