NM000126: EEG dataset, 34 subjects#
Wang2016 – SSVEP Wang 2016 dataset
Access recordings and metadata through EEGDash.
Citation: Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao (2016). Wang2016 – SSVEP Wang 2016 dataset.
Modality: EEG Subjects: 34 Recordings: 34 License: CC-BY-4.0 Source: NeMAR
Metadata: 90% complete
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000126
dataset = NM000126(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000126(cache_dir="./data", subject="01")
Advanced query
dataset = NM000126(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info["sfreq"])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000126,
  title  = {Wang2016 – SSVEP Wang 2016 dataset},
  author = {Yijun Wang and Xiaogang Chen and Xiaorong Gao and Shangkai Gao},
  year   = {2016},
}
About This Dataset#
SSVEP Wang 2016 dataset.
Dataset Overview
Code: Wang2016
Paradigm: ssvep
DOI: 10.1109/TNSRE.2016.2627556
Subjects: 34
Sessions per subject: 1
Events: 8=1, 9=2, 10=3, 11=4, 12=5, 13=6, 14=7, 15=8, 8.2=9, 9.2=10, 10.2=11, 11.2=12, 12.2=13, 13.2=14, 14.2=15, 15.2=16, 8.4=17, 9.4=18, 10.4=19, 11.4=20, 12.4=21, 13.4=22, 14.4=23, 15.4=24, 8.6=25, 9.6=26, 10.6=27, 11.6=28, 12.6=29, 13.6=30, 14.6=31, 15.6=32, 8.8=33, 9.8=34, 10.8=35, 11.8=36, 12.8=37, 13.8=38, 14.8=39, 15.8=40
Trial interval: [0.5, 5.5] s
File format: MATLAB MAT
Data preprocessed: True
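The event labels above encode the stimulus frequency in Hz (integer labels for whole-Hz targets), and the integer codes 1–40 enumerate the eight base frequencies (8–15 Hz) within each 0.2 Hz offset group. A small sketch reconstructing that mapping:

```python
# Rebuild the event-code mapping from the listing above: codes 1-40 cover
# eight base frequencies (8-15 Hz) across five 0.2 Hz offset groups.
label_to_code = {}
code = 1
for offset in (0.0, 0.2, 0.4, 0.6, 0.8):
    for base in range(8, 16):  # base frequencies 8..15 Hz
        freq = round(base + offset, 1)
        label = str(base) if offset == 0.0 else str(freq)
        label_to_code[label] = code
        code += 1

print(label_to_code["8"], label_to_code["15.8"])  # 1 40
```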
Acquisition
Sampling rate: 250.0 Hz
Number of channels: 64
Channel types: eeg=64
Channel names: AF3, AF4, C1, C2, C3, C4, C5, C6, CB1, CB2, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, M1, M2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8
Montage: standard_1005
Hardware: Synamps2 EEG system (Neuroscan, Inc.)
Reference: Cz
Line frequency: 50.0 Hz
Online filters: bandpass 0.15–200 Hz, notch 50 Hz
Impedance threshold: 10 kOhm
Participants
Number of subjects: 34
Health status: healthy
Age: mean=22.0, min=17, max=34
Gender distribution: female=17, male=18
BCI experience: 8 experienced, 27 naïve
Species: human
Experimental Protocol
Paradigm: ssvep
Number of classes: 40
Class labels: 8, 9, 10, 11, 12, 13, 14, 15, 8.2, 9.2, 10.2, 11.2, 12.2, 13.2, 14.2, 15.2, 8.4, 9.4, 10.4, 11.4, 12.4, 13.4, 14.4, 15.4, 8.6, 9.6, 10.6, 11.6, 12.6, 13.6, 14.6, 15.6, 8.8, 9.8, 10.8, 11.8, 12.8, 13.8, 14.8, 15.8
Trial duration: 6.0 s
Study design: Cue-guided target selecting task using a 40-target BCI speller with joint frequency and phase modulation (JFPM) approach
Stimulus type: visual flicker
Stimulus modalities: visual
Primary modality: visual
Synchronicity: synchronous
Mode: offline
Instructions: Subjects were asked to shift their gaze to the target as soon as possible after cue and avoid eye blinks during the 5-s stimulation duration
Stimulus presentation: SoftwareName=MATLAB Psychophysics Toolbox Ver. 3 (PTB-3), display=23.6-in LCD monitor (Acer GD245 HQ, response time: 2 ms), resolution=1920 × 1080 pixels at 60 Hz, viewing_distance=70 cm, stimulus_size=140 × 140 pixels (3.2° × 3.2°), character_size=32 × 32 pixels (0.7° × 0.7°), matrix_size=1510 × 1037 pixels (34° × 24°), matrix_layout=5 × 8 stimulus matrix, inter_stimulus_distance=50 pixels vertical and horizontal, method=sampled sinusoidal stimulation
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8
9
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/9
10
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/10
11
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/11
12
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12
13
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/13
14
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14
15
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15
8.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8_2
9.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/9_2
10.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/10_2
11.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/11_2
12.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12_2
13.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/13_2
14.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_2
15.2
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_2
8.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8_4
9.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/9_4
10.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/10_4
11.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/11_4
12.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12_4
13.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/13_4
14.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_4
15.4
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_4
8.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8_6
9.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/9_6
10.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/10_6
11.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/11_6
12.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12_6
13.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/13_6
14.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_6
15.6
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_6
8.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8_8
9.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/9_8
10.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/10_8
11.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/11_8
12.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12_8
13.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/13_8
14.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_8
15.8
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/15_8
Paradigm-Specific Parameters
Detected paradigm: ssvep
Stimulus frequencies: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz
Frequency resolution: 0.2 Hz
Number of targets: 40
Number of repetitions: 6
Cue duration: 0.5 s
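Given the frequency range, the 0.2 Hz resolution, and (per the Methodology section) phases running 0 to 19.5π in 0.5π steps, the 40 JFPM targets can be reconstructed programmatically. A minimal sketch, assuming a linear pairing of frequency and phase indices:

```python
import math

# 40 JFPM targets: frequencies 8.0-15.8 Hz in 0.2 Hz steps; phases 0 to
# 19.5*pi in 0.5*pi steps (the linear index pairing is an assumption).
targets = [
    {"freq": round(8.0 + 0.2 * k, 1), "phase_rad": 0.5 * math.pi * k}
    for k in range(40)
]

print(len(targets), targets[0]["freq"], targets[-1]["freq"])  # 40 8.0 15.8
```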
Data Structure
Trials: 240
Trials per class: 6 per target
Blocks per session: 6
Trials context: 40 trials per block corresponding to all 40 characters in random order
Preprocessing
Data state: Raw epochs extracted from continuous EEG recordings according to stimulus onsets, downsampled to 250 Hz, no digital filters applied
Preprocessing applied: True
Steps: Epoch extraction according to stimulus onsets from event channel, Downsampling from 1000 Hz to 250 Hz, No digital filters applied in preprocessing
Downsampled to: 250.0 Hz
Epoch window: [-0.5, 5.5] s
Notes: Data epochs include 0.5 s before stimulus onset, 5 s for stimulation, and 0.5 s after stimulus offset. Upper bound frequency of SSVEP harmonics is around 90 Hz.
Signal Processing
Classifiers: CCA, FBCCA
Feature extraction: Canonical Correlation Analysis, Filter Bank CCA
Frequency bands: analyzed=[7.0, 90.0] Hz
Cross-Validation
Method: leave-one-out (on six blocks)
Folds: 6
Evaluation type: within_subject
Performance (Original Study)
ITR (peak): 117.75 bits/min
Peak ITR, FBCCA (0.55 s, gaze-shift included): 117.75 bits/min
Peak ITR, FBCCA (2 s, gaze-shift included): 68.99 bits/min
Peak ITR, CCA (0.55 s, gaze-shift included): 89.89 bits/min
Peak ITR, CCA (2 s, gaze-shift included): 56.03 bits/min
Visual latency: 136.91 ms (SD 18.4 ms)
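ITR figures like those above are conventionally computed with the Wolpaw formula. A minimal sketch (the selection time T includes gaze-shift time, and the per-condition accuracies needed to reproduce the reported peaks exactly are not listed here):

```python
import math

def itr_bits_per_min(n, p, t_sec):
    """Wolpaw information transfer rate in bits/min.

    n: number of targets, p: classification accuracy,
    t_sec: time per selection in seconds (including gaze shift).
    """
    if p <= 1.0 / n:
        return 0.0  # at or below chance, conventionally clipped to zero
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / t_sec

# Perfect accuracy on 40 targets at one selection per second:
print(round(itr_bits_per_min(40, 1.0, 1.0), 2))  # 319.32
```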
BCI Application
Applications: speller
Environment: dimly lit soundproof room
Online feedback: False
Tags
Pathology: Healthy
Modality: Visual
Type: Perception
Documentation
Description: A benchmark SSVEP dataset acquired with a 40-target BCI speller using joint frequency and phase modulation (JFPM) approach
DOI: 10.1109/TNSRE.2016.2627556
License: CC-BY-4.0
Investigators: Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao
Senior author: Shangkai Gao
Contact: wangyj@semi.ac.cn; chenxg@bme.cams.cn; gxrdea@tsinghua.edu.cn; gsk-dea@tsinghua.edu.cn
Institution: Tsinghua University
Department: Department of Biomedical Engineering, Tsinghua University
Address: Beijing, China
Country: CN
Repository: BNCI Horizon 2020
Publication year: 2016
Funding: National Natural Science Foundation of China (No. 61431007, No. 91220301, and No. 91320202); National High-tech R&D Program (863) of China (No. 2012AA011601); Recruitment Program for Young Professionals; Young Talents Lift Project of Chinese Association of Science and Technology; PUMC Youth Fund (No. 3332016101)
Ethics approval: Research Ethics Committee of Tsinghua University
Keywords: Brain–computer interface (BCI), electroencephalogram (EEG), joint frequency and phase modulation (JFPM), public data set, steady-state visual evoked potential (SSVEP)
External Links
BNCI Horizon 2020: https://bnci-horizon-2020.eu/database/data-sets
Abstract
This paper presents a benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain–computer interface (BCI) speller. The dataset consists of 64-channel Electroencephalogram (EEG) data from 35 healthy subjects (8 experienced and 27 naïve) while they performed a cue-guided target selecting task. The virtual keyboard of the speller was composed of 40 visual flickers, which were coded using a joint frequency and phase modulation (JFPM) approach. The stimulation frequencies ranged from 8 Hz to 15.8 Hz with an interval of 0.2 Hz. The phase difference between two adjacent frequencies was 0.5π. For each subject, the data included six blocks of 40 trials corresponding to all 40 flickers indicated by a visual cue in a random order. The stimulation duration in each trial was five seconds.
Methodology
The study used a cue-guided target-selection task with a 40-target BCI speller. Stimuli were presented on a 23.6-in LCD monitor at 60 Hz using the sampled sinusoidal stimulation method. Each trial started with a 0.5-s target cue, followed by 5 s of concurrent flickering of all stimuli, and ended with a 0.5-s blank screen. The experiment included six blocks per subject, with 40 trials per block in random order. EEG data were recorded with a Synamps2 system at 1000 Hz from 64 electrodes referenced to Cz. Data were preprocessed by extracting epochs according to stimulus onsets and downsampling to 250 Hz. The JFPM approach encoded 40 characters using frequencies from 8 to 15.8 Hz (0.2 Hz interval) and phases from 0 to 19.5π (0.5π interval). Performance was evaluated using CCA and FBCCA methods with leave-one-out cross-validation.
References
Wang, Y., Chen, X., Gao, X., & Gao, S. (2016). A benchmark dataset for SSVEP-based brain–computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), 1746-1752. https://doi.org/10.1109/TNSRE.2016.2627556
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID: NM000126
Title: Wang2016 – SSVEP Wang 2016 dataset
Author (year): Wang2016
Canonical: —
Importable as: NM000126, Wang2016
Year: 2016
Authors: Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao
License: CC-BY-4.0
Citation / DOI: 10.1109/TNSRE.2016.2627556
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 34
Recordings: 34
Tasks: 1
Channels: 64
Sampling rate (Hz): 250.0
Duration (hours): 14.51
Pathology: Healthy
Modality: Visual
Type: Perception
Size on disk: 3.1 GB
File count: 34
Format: BIDS
License: CC-BY-4.0
DOI: —
API Reference#
Use the NM000126 class to access this dataset programmatically.
- class eegdash.dataset.NM000126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset

Wang2016 – SSVEP Wang 2016 dataset.

- Study: nm000126 (NeMAR)
- Author (year): Wang2016
- Canonical: —
- Also importable as: NM000126, Wang2016
- Modality: eeg; Experiment type: Perception; Subject type: Healthy. Subjects: 34; recordings: 34; tasks: 1.

Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
  Local dataset cache directory (cache_dir / dataset_id).
  Type: Path
- query#
  Merged query with the dataset filter applied.
  Type: dict
- records#
  Metadata records used to build the dataset, if pre-fetched.
  Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/nm000126
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000126
Examples
>>> from eegdash.dataset import NM000126
>>> dataset = NM000126(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset, eegdash.dataset