NM000266: EEG dataset, 13 subjects#
Sosulski2019
Access recordings and metadata through EEGDash.
Citation: Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann (2019). Sosulski2019. DOI: 10.48550/arXiv.2109.06011
Modality: eeg Subjects: 13 Recordings: 1060 License: CC-BY-SA-4.0 Source: nemar
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000266
dataset = NM000266(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000266(cache_dir="./data", subject="01")
Advanced query
dataset = NM000266(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000266,
title = {Sosulski2019},
author = {Jan Sosulski and David Hübner and Aaron Klein and Michael Tangermann},
doi = {10.48550/arXiv.2109.06011},
url = {https://doi.org/10.48550/arXiv.2109.06011},
}
About This Dataset#
Sosulski2019
P300 dataset from initial spot study.
Dataset Overview
Code: Sosulski2019
Paradigm: p300
DOI: 10.6094/UNIFR/154576
Subjects: 13
Sessions per subject: 80
Events: Target=21, NonTarget=1
Trial interval: [-0.2, 1] s
File format: brainvision
Acquisition
Sampling rate: 1000.0 Hz
Number of channels: 31
Channel types: eeg=31, eog=1, misc=5
Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, EOGvu, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, P10, P3, P4, P7, P8, P9, Pz, T7, T8, x_EMGl, x_GSR, x_Optic, x_Pulse, x_Respi
Montage: standard_1020
Hardware: BrainProducts BrainAmp DC
Reference: nose
Sensor type: passive Ag/AgCl
Line frequency: 50.0 Hz
Auxiliary channels: EOG (1 ch, vertical)
Participants
Number of subjects: 13
Health status: healthy
Age: mean=22.7, std=1.64, min=20, max=26
Gender distribution: male=5, female=8
Species: human
Experimental Protocol
Paradigm: p300
Number of classes: 2
Class labels: Target, NonTarget
Study design: Subjects focused attention on target tones (1000 Hz) and ignored non-target tones (500 Hz) presented via a speaker at 65 cm distance. One trial consisted of 15 target and 75 non-target stimuli in pseudo-random order, with at least two non-target tones between target tones. The experiment was split into optimization and validation parts.
Stimulus type: oddball
Stimulus modalities: auditory
Primary modality: auditory
Synchronicity: synchronous
Mode: online
Instructions: Focus on the target tones (1000 Hz) and ignore the non-target tones (500 Hz). Refrain from blinking and movement as much as possible.
Stimulus presentation: target_tone_hz=1000, non_target_tone_hz=500, tone_duration_ms=40, distance_cm=65
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Auditory-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Auditory-presentation
└─ Non-target
Paradigm-Specific Parameters
Detected paradigm: p300
Number of targets: 1
Data Structure
Trials: variable; the optimization part used time-limited trials (20 minutes per strategy), the validation part used 20 trials per SOA
Trials per class: target=13 per trial (after preprocessing; originally 15), non_target=65 per trial (after preprocessing; originally 75)
Trials context: Each trial consisted of 90 stimuli (15 target, 75 non-target). After preprocessing (removing the first and last 6 epochs), 78 data points remain per trial: 13 target and 65 non-target epochs.
Signal Processing
Classifiers: rLDA, shrinkage LDA
Feature extraction: mean amplitude in time intervals
Frequency band analyzed: 1.5–40.0 Hz
Cross-Validation
Method: 13-fold
Folds: 13
Evaluation type: within_session
Performance (Original Study)
AUC: 0.701
Mean AUC (AUC-ucb): 0.701
Mean AUC (AUC-rand): 0.704
Mean AUC (P300-ucb): 0.67
Mean AUC (P300-rand): 0.681
Mean AUC (fixed 60 ms SOA): 0.517
BCI Application
Applications: communication
Online feedback: False
Tags
Pathology: Healthy
Modality: Auditory
Type: Research
Documentation
Description: Auditory oddball ERP dataset from 13 healthy subjects. Two sinusoidal tones (target 1000 Hz, non-target 500 Hz) presented at various stimulus onset asynchronies (SOAs, 60–600 ms). 31-channel EEG recorded at 1000 Hz with a BrainProducts BrainAmp DC. Raw BrainVision-format data.
DOI: 10.48550/arXiv.2109.06011
License: CC-BY-SA-4.0
Investigators: Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann
Senior author: Michael Tangermann
Contact: jan.sosulski@blbt.uni-freiburg.de; davhuebn@gmail.com; kleinaa@cs.uni-freiburg.de; michael.tangermann@donders.ru.nl
Institution: University of Freiburg
Country: DE
Repository: FreiDok
Data URL: https://freidok.uni-freiburg.de/data/154576
Publication year: 2021
Funding: Cluster of Excellence BrainLinks-BrainTools funded by the German Research Foundation (DFG) [grant number EXC 1086]; DFG project SuitAble [grant number TA 1258/1-1]; state of Baden-Württemberg, Germany, through bwHPC and the German Research Foundation (DFG) [grant number INST 39/963-1 FUGG]
Ethics approval: Approved by the ethics committee of the university medical center of Freiburg
Acknowledgements: Experiments were performed according to the Declaration of Helsinki.
Keywords: Bayesian optimization, individual experimental parameters, brain-computer interfaces, learning from small data, auditory event-related potentials, closed-loop parameter optimization
Abstract
The decoding of brain signals recorded via, e.g., an electroencephalogram, using machine learning is key to brain-computer interfaces (BCIs). Stimulation parameters or other experimental settings of the BCI protocol typically are chosen according to the literature. The decoding performance directly depends on the choice of parameters, as they influence the elicited brain signals and optimal parameters are subject-dependent. Thus a fast and automated selection procedure for experimental parameters could greatly improve the usability of BCIs. We evaluate a standalone random search and a combined Bayesian optimization with random search into a closed-loop auditory event-related potential protocol. We aimed at finding the individually best stimulation speed—also known as stimulus onset asynchrony (SOA)—that maximizes the classification performance of a regularized linear discriminant analysis.
Methodology
The experiment was divided into two parts: (1) Optimization part: four strategies (AUC-ucb, AUC-rand, P300-ucb, P300-rand) each allocated 20 minutes to find optimal SOA. Strategies alternated to minimize non-stationarity effects. (2) Validation part: evaluated SOAs from each optimization strategy plus fixed 60ms SOA using 20 trials each (in blocks of 5 trials). Features were mean amplitudes in 5 time intervals ([100, 170], [171, 230], [231, 300], [301, 410], [411, 500] ms) across 31 channels (155 dimensions total). Classification used rLDA with automatic shrinkage regularization and 13-fold cross-validation on single trials.
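The feature and classification pipeline just described (mean amplitude in the five intervals per channel, 31 × 5 = 155 features, shrinkage-regularized LDA, 13-fold cross-validation) can be sketched with scikit-learn on synthetic epochs; the data below are random stand-ins with an injected P300-like deflection, not the actual recordings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sfreq = 1000.0
times = np.arange(-0.2, 0.5, 1 / sfreq)       # epoch time axis in seconds
n_epochs, n_channels = 390, 31                # synthetic stand-in for one session
X_raw = rng.normal(size=(n_epochs, n_channels, times.size))
y = rng.integers(0, 2, n_epochs)              # 0 = NonTarget, 1 = Target

# Inject a crude P300-like deflection into the target class
X_raw[y == 1] += 0.5 * ((times >= 0.3) & (times <= 0.41))

# Mean amplitude in the five intervals used in the study (seconds)
intervals = [(0.100, 0.170), (0.171, 0.230), (0.231, 0.300),
             (0.301, 0.410), (0.411, 0.500)]
X = np.concatenate(
    [X_raw[:, :, (times >= lo) & (times <= hi)].mean(axis=2)
     for lo, hi in intervals], axis=1)        # (n_epochs, 31 * 5 = 155)

# Shrinkage-regularized LDA (an rLDA stand-in) with 13-fold cross-validation
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=13, scoring="roc_auc")
print(X.shape, round(scores.mean(), 3))
```

The `lsqr` solver with `shrinkage="auto"` uses Ledoit-Wolf covariance shrinkage, the standard automatic-regularization choice for ERP features of this dimensionality.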
References
Sosulski, J., Tangermann, M.: Electroencephalogram signals recorded from 13 healthy subjects during an auditory oddball paradigm under different stimulus onset asynchrony conditions. Dataset. DOI: 10.6094/UNIFR/154576
Sosulski, J., Tangermann, M.: Spatial filters for auditory evoked potentials transfer between different experimental conditions. Graz BCI Conference. 2019.
Sosulski, J., Hübner, D., Klein, A., Tangermann, M.: Online Optimization of Stimulation Speed in an Auditory Brain-Computer Interface under Time Constraints. arXiv preprint. 2021.
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID: NM000266
Title: Sosulski2019
Author (year): Sosulski2019
Canonical: —
Importable as: NM000266, Sosulski2019
Year: 2019
Authors: Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann
License: CC-BY-SA-4.0
Citation / DOI: 10.48550/arXiv.2109.06011
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 13
Recordings: 1060
Tasks: 1
Channels: 37
Sampling rate (Hz): 1000.0
Duration (hours): 9.79
Pathology: Healthy
Modality: Auditory
Type: Attention
Size on disk: 3.7 GB
File count: 1060
Format: BIDS
License: CC-BY-SA-4.0
DOI: 10.48550/arXiv.2109.06011
API Reference#
Use the NM000266 class to access this dataset programmatically.
- class eegdash.dataset.NM000266(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset

Sosulski2019

- Study: nm000266 (NeMAR)
- Author (year): Sosulski2019
- Canonical: —
- Also importable as: NM000266, Sosulski2019
- Modality: eeg; Experiment type: Attention; Subject type: Healthy; Subjects: 13; recordings: 1060; tasks: 1
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type: Path
- query#
Merged query with the dataset filter applied.
- Type: dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/nm000266
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000266
DOI: https://doi.org/10.48550/arXiv.2109.06011
Examples
>>> from eegdash.dataset import NM000266
>>> dataset = NM000266(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset