NM000129: EEG dataset, 70 subjects#
Liu2020 – BETA SSVEP benchmark dataset
Access recordings and metadata through EEGDash.
Citation: Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao (2020). Liu2020 – BETA SSVEP benchmark dataset.
Modality: eeg · Subjects: 70 · Recordings: 70 · License: Non-commercial research use · Source: NeMAR
Metadata: 90% complete
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000129
dataset = NM000129(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000129(cache_dir="./data", subject="01")
Advanced query
dataset = NM000129(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
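To work with individual 3 s trials rather than continuous signals, the Raw objects can be epoched with MNE. A minimal sketch, assuming the stimulus events are exposed as annotations on the Raw object (event storage may vary per recording):

import mne

raw = dataset.datasets[0].raw
events, event_id = mne.events_from_annotations(raw)  # assumes annotation-based events
epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=0.0, tmax=3.0,  # the 3 s trial interval documented for this dataset
    baseline=None, preload=True,
)
print(epochs)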
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000129,
  title  = {Liu2020 – BETA SSVEP benchmark dataset},
  author = {Bingchuan Liu and Xiaoshan Huang and Yijun Wang and Xiaogang Chen and Xiaorong Gao},
  year   = {2020},
  doi    = {10.3389/fnins.2020.00627},
}
About This Dataset#
BETA SSVEP benchmark dataset
Dataset Overview
Code: Liu2020BETA
Paradigm: ssvep
DOI: 10.3389/fnins.2020.00627
Subjects: 70
Sessions per subject: 1
Events (stimulus frequency in Hz = event code): 8.6=1, 8.8=2, 9=3, 9.2=4, 9.4=5, 9.6=6, 9.8=7, 10=8, 10.2=9, 10.4=10, 10.6=11, 10.8=12, 11=13, 11.2=14, 11.4=15, 11.6=16, 11.8=17, 12=18, 12.2=19, 12.4=20, 12.6=21, 12.8=22, 13=23, 13.2=24, 13.4=25, 13.6=26, 13.8=27, 14=28, 14.2=29, 14.4=30, 14.6=31, 14.8=32, 15=33, 15.2=34, 15.4=35, 15.6=36, 15.8=37, 8=38, 8.2=39, 8.4=40 (decoded in the sketch below)
Trial interval: [0, 3.0] s
File format: MAT
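The event codes follow the listed order: 8.6–15.8 Hz take codes 1–37, and 8.0, 8.2, 8.4 Hz close the list as codes 38–40. A small sketch reconstructing the code → frequency lookup from that convention:

# Rebuild the event-code -> stimulus-frequency lookup from the listed order.
freqs = [round(8.6 + 0.2 * i, 1) for i in range(37)] + [8.0, 8.2, 8.4]
code_to_freq = {code: f for code, f in enumerate(freqs, start=1)}
assert code_to_freq[1] == 8.6 and code_to_freq[40] == 8.4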
Acquisition
Sampling rate: 250.0 Hz
Number of channels: 64
Channel types: eeg=64
Channel names: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, M1, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, M2, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2
Montage: standard_1005
Hardware: Synamps2 (Neuroscan)
Reference: Cz
Line frequency: 50.0 Hz
Impedance threshold: 10 kOhm
Participants
Number of subjects: 70
Health status: healthy
Age: mean=25.14, std=7.97, min=9, max=64
Gender distribution: male=42, female=28
BCI experience: mixed
Experimental Protocol
Paradigm: ssvep
Task type: cued-spelling
Number of classes: 40
Class labels: 8.6, 8.8, 9, 9.2, 9.4, 9.6, 9.8, 10, 10.2, 10.4, 10.6, 10.8, 11, 11.2, 11.4, 11.6, 11.8, 12, 12.2, 12.4, 12.6, 12.8, 13, 13.2, 13.4, 13.6, 13.8, 14, 14.2, 14.4, 14.6, 14.8, 15, 15.2, 15.4, 15.6, 15.8, 8, 8.2, 8.4
Trial duration: 3.0 s
Feedback type: visual
Stimulus type: JFPM visual flicker
Stimulus modalities: visual
Primary modality: visual
Synchronicity: synchronous
Mode: offline
Training/test split: False
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
All 40 frequency events carry the same annotation, differing only in the final Label tag (the stimulus frequency with the decimal point written as an underscore), shown here for the 8.6 Hz event:

8.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8_6

The remaining 39 events (8 through 15.8 in 0.2 Hz steps) follow the identical pattern, e.g. Label/10_2 for 10.2 and Label/15 for 15.
Paradigm-Specific Parameters
Detected paradigm: ssvep
Stimulus frequencies: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz
Frequency resolution: 0.2 Hz
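The 40 targets tile 8.0–15.8 Hz at the stated 0.2 Hz resolution, so the full grid can be regenerated in one line (rounding guards against binary floating-point artifacts such as 12.600000000000001):

import numpy as np

freqs = np.round(np.linspace(8.0, 15.8, 40), 1)  # 8.0, 8.2, ..., 15.8 Hz
assert freqs.size == 40  # one target per event code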
Data Structure
Trials: 160
Blocks per session: 4
Preprocessing
Data state: epoched
Notch filter: 50 Hz
Filter type: zero-phase FIR
Downsampled to: 250.0 Hz
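The released recordings already include these steps. For readers who want to apply comparable preprocessing to other raw data, a rough MNE equivalent might look like the sketch below (an assumption-laden sketch, not the authors' exact pipeline; MNE's notch_filter defaults to a zero-phase FIR design):

raw = dataset.datasets[0].raw.copy()
raw.load_data()         # filtering requires the data in memory
raw.notch_filter(50.0)  # 50 Hz line-noise notch, zero-phase FIR by default
raw.resample(250.0)     # match the documented 250 Hz rate (a no-op here)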
Signal Processing
Classifiers: TRCA, msTRCA, FBCCA, CCA
Feature extraction: CCA, TRCA, FBCCA
Frequency bands: bandpass=[3.0, 100.0] Hz
Spatial filters: CCA, TRCA
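All of the listed methods are template-correlation approaches. As a generic reference point (not the exact BETA pipeline), plain CCA scoring correlates each trial against sine/cosine templates at every candidate frequency plus harmonics and picks the best match; this sketch assumes an epoched trial array of shape (channels, samples) at 250 Hz:

import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(trial, freq, sfreq=250.0, n_harmonics=3):
    # Canonical correlation between one trial and sin/cos templates at freq.
    t = np.arange(trial.shape[1]) / sfreq
    ref = np.vstack([f(2 * np.pi * (h + 1) * freq * t)
                     for h in range(n_harmonics) for f in (np.sin, np.cos)])
    x, y = CCA(n_components=1).fit_transform(trial.T, ref.T)
    return np.corrcoef(x[:, 0], y[:, 0])[0, 1]

def classify(trial, candidate_freqs):
    # Predict the flicker frequency with the highest canonical correlation.
    return max(candidate_freqs, key=lambda f: cca_score(trial, f))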
Cross-Validation
Method: leave-one-block-out
Folds: 4
Evaluation type: within_subject
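With 4 blocks of 40 trials each, leave-one-block-out is simply grouped cross-validation on the block label. A minimal scikit-learn sketch with hypothetical placeholder arrays (real features and labels would come from the epoched trials):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.randn(160, 64 * 750)  # placeholder: 160 trials, 64 ch x 750 samples
y = np.repeat(np.arange(40), 4)     # placeholder: 40 classes x 4 repetitions
blocks = np.tile(np.arange(4), 40)  # block index per trial (0-3)

logo = LeaveOneGroupOut()           # yields 4 folds, one per held-out block
for train_idx, test_idx in logo.split(X, y, groups=blocks):
    pass  # fit on three blocks, evaluate on the held-out block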
BCI Application
Applications: speller
Environment: classroom
Online feedback: True
Tags
Pathology: healthy
Modality: visual
Type: perception
Documentation
DOI: 10.3389/fnins.2020.00627
License: Non-commercial research use
Investigators: Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao
Senior author: Xiaorong Gao
Institution: Tsinghua University
Department: Department of Biomedical Engineering, Tsinghua University
Country: CN
Repository: Tsinghua BCI Lab
Data URL: http://bci.med.tsinghua.edu.cn/upload/liubingchuan/
Publication year: 2020
Funding: National Key Research and Development Program of China (No. 2017YFB1002505); Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200); Key Research and Development Program of Guangdong Province (No. 2018B030339001); National Natural Science Foundation of China (Grant No. 61431007)
Ethics approval: Ethics Committee of Tsinghua University, No. 20190002
Keywords: SSVEP, BCI, EEG, benchmark, JFPM
References
B. Liu, X. Huang, Y. Wang, X. Chen, and X. Gao, “BETA: A Large Benchmark Database Toward SSVEP-BCI Application,” Frontiers in Neuroscience, vol. 14, p. 627, 2020. DOI: 10.3389/fnins.2020.00627

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID: NM000129
Title: Liu2020 – BETA SSVEP benchmark dataset
Author (year): Liu2020
Canonical: BetaSSVEP
Importable as: NM000129, Liu2020, BetaSSVEP, BETA_SSVEP, BETA
Year: 2020
Authors: Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao
License: Non-commercial research use
Citation / DOI: 10.3389/fnins.2020.00627
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 70
Recordings: 70
Tasks: 1
Channels: 64
Sampling rate (Hz): 250.0
Duration (hours): 13.02
Pathology: Healthy
Modality: Visual
Type: Perception
Size on disk: 2.8 GB
File count: 70
Format: BIDS
License: Non-commercial research use
DOI: 10.3389/fnins.2020.00627
API Reference#
Use the NM000129 class to access this dataset programmatically.
class eegdash.dataset.NM000129(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset

Liu2020 – BETA SSVEP benchmark dataset.

Study: nm000129 (NeMAR)
Author (year): Liu2020
Canonical: BetaSSVEP, BETA_SSVEP, BETA
Also importable as: NM000129, Liu2020, BetaSSVEP, BETA_SSVEP, BETA
Modality: eeg; Experiment type: Perception; Subject type: Healthy
Subjects: 70; recordings: 70; tasks: 1

Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#
Local dataset cache directory (cache_dir / dataset_id).
Type: Path

query#
Merged query with the dataset filter applied.
Type: dict

records#
Metadata records used to build the dataset, if pre-fetched.
Type: list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/nm000129
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000129
Examples
>>> from eegdash.dataset import NM000129
>>> dataset = NM000129(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset