NM000234: EEG dataset, 21 subjects#
BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset
Access recordings and metadata through EEGDash.
Citation: Martijn Schreuder, Benjamin Blankertz, Michael Tangermann (2011). BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset.
Modality: eeg Subjects: 21 Recordings: 42 License: CC-BY-NC-ND-4.0 Source: nemar
Metadata: 90% complete
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000234
dataset = NM000234(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000234(cache_dir="./data", subject="01")
Advanced query
dataset = NM000234(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000234,
title = {BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset},
author = {Martijn Schreuder and Benjamin Blankertz and Michael Tangermann},
year = {2011},
}
About This Dataset#
BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset.
Dataset Overview
Code: BNCI2015-009
Paradigm: p300
DOI: 10.3389/fnins.2011.00112
Subjects: 21
Sessions per subject: 1
Events: Target=1, NonTarget=2
Trial interval: [0, 0.8] s
Runs per session: 2
File format: gdf
Data preprocessed: True
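The event codes and trial interval above are enough to epoch a recording by hand. Below is a minimal NumPy sketch with a hypothetical `epoch` helper (not part of EEGDash; in practice you would use `mne.Epochs` on the Raw object from the Quickstart), assuming events are given as (sample index, code) pairs with Target=1 and NonTarget=2:

```python
import numpy as np

def epoch(data, events, sfreq, tmin=0.0, tmax=0.8):
    """Cut fixed-length epochs from a (n_channels, n_samples) array.

    `events` is a list of (sample_index, code) pairs, with Target=1 and
    NonTarget=2 as in the overview above; the default [tmin, tmax] window
    matches the [0, 0.8] s trial interval.
    """
    start = int(round(tmin * sfreq))
    stop = int(round(tmax * sfreq))
    epochs, codes = [], []
    for onset, code in events:
        lo, hi = onset + start, onset + stop
        if 0 <= lo and hi <= data.shape[1]:  # skip truncated trials
            epochs.append(data[:, lo:hi])
            codes.append(code)
    return np.stack(epochs), np.array(codes)
```

At the 250 Hz sampling rate listed below, a [0, 0.8] s window corresponds to 200 samples per epoch.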
Acquisition
Sampling rate: 250.0 Hz
Number of channels: 60
Channel types: eeg=60, eog=2
Montage: 10-20
Hardware: Brain Products 128-channel amplifier
Software: Matlab
Reference: nose
Sensor type: Ag/AgCl electrodes
Line frequency: 50.0 Hz
Online filters: 0.1-250 Hz analog bandpass
Auxiliary channels: EOG (2 ch, bipolar)
Participants
Number of subjects: 21
Health status: healthy
Clinical population: Healthy
Age: mean=30.3, min=22, max=55
Gender distribution: male=6, female=4
Handedness: unknown
BCI experience: mixed
Species: human
Experimental Protocol
Paradigm: p300
Task type: oddball
Number of classes: 2
Class labels: Target, NonTarget
Trial duration: 0.8 s
Tasks: spatial_auditory_oddball
Study design: Offline auditory oddball task using spatial location of auditory stimuli as discriminating cue. Frontal five speakers used (speakers 1,2,3,7,8) with 45 degree spacing. Three conditions tested: C300 (300ms ISI), C175 (175ms ISI), C300s (300ms ISI, single speaker). Each stimulus was unique 40ms complex sound from bandpass filtered white noise with tone overlay.
Study domain: BCI
Feedback type: none
Stimulus type: auditory_spatial
Stimulus modalities: auditory
Primary modality: auditory
Synchronicity: synchronous
Mode: offline
Training/test split: False
Instructions: Subjects asked to mentally count target stimulations or respond by keypress (condition Cr). Minimize eye movements and muscle contractions. Target direction indicated prior to each block visually and by presenting stimulus from that location.
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Auditory-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Auditory-presentation
└─ Non-target
Paradigm-Specific Parameters
Detected paradigm: p300
Number of targets: 5
Number of repetitions: 15
Inter-stimulus interval: 300.0 ms
Data Structure
Trials: varied by condition
Blocks per session: 50
Trials context: BCI experiments: C300 (50 trials × 75 subtrials = 3750 subtrials), C175 (40 trials × 75 subtrials = 3000 subtrials), C300s (20 trials × 75 subtrials = 1500 subtrials). Physiological experiments: C1000 (32 trials × 80 subtrials = 2560 subtrials), Cr (576-768 subtrials)
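The subtrial totals in the trials context follow directly from trials per condition times subtrials per trial; a quick arithmetic check:

```python
# Quick check of the subtrial totals listed above
# (trials per condition x subtrials per trial).
conditions = {
    "C300":  (50, 75),
    "C175":  (40, 75),
    "C300s": (20, 75),
    "C1000": (32, 80),
}
totals = {name: n_trials * n_sub for name, (n_trials, n_sub) in conditions.items()}
print(totals)  # {'C300': 3750, 'C175': 3000, 'C300s': 1500, 'C1000': 2560}
```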
Preprocessing
Data state: filtered
Preprocessing applied: True
Steps: bandpass filter, notch filter, downsampling, artifact rejection
Highpass filter: 0.1 Hz
Lowpass filter: 250.0 Hz
Bandpass filter: 0.1-250.0 Hz
Notch filter: [50] Hz
Filter type: Chebyshev II order 8 (for visual inspection: 30 Hz pass, 42 Hz stop, 50 dB damping)
Artifact methods: threshold-based artifact rejection
Re-reference: nose
Downsampled to: 100.0 Hz
Epoch window: [-0.15, 0.8] s
Notes: Raw data acquired at 1000 Hz. For visual inspection: low-pass filtered with order 8 Chebyshev II filter (30 Hz pass, 42 Hz stop, 50 dB damping) applied forward and backward to minimize phase shifts, then downsampled to 100 Hz. For classification: same filter applied causally (forward only) for online portability. Artifact rejection used simple threshold method: subtrials with deflection >70 µV over ocular channels compared to baseline were rejected.
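The filter-and-downsample pipeline in the notes can be sketched with SciPy. This is a hedged reconstruction, not the authors' code: the parameters (order-8 Chebyshev II, 42 Hz stopband edge, 50 dB attenuation, decimation from 1000 Hz to 100 Hz, 70 µV EOG threshold) are taken from the notes above, and `sosfiltfilt` vs `sosfilt` mirrors the forward-backward vs causal distinction:

```python
import numpy as np
from scipy import signal

def preprocess(data, fs=1000.0, causal=False):
    """Low-pass filter and downsample one recording as described above.

    data : (n_channels, n_samples) array sampled at `fs` Hz.
    Order-8 Chebyshev II low-pass, 42 Hz stopband edge, 50 dB attenuation,
    applied forward-backward (visual inspection) or causally
    (online-portable classification), then decimated to 100 Hz.
    """
    sos = signal.cheby2(8, 50, 42, btype="lowpass", fs=fs, output="sos")
    if causal:
        filtered = signal.sosfilt(sos, data, axis=-1)
    else:
        filtered = signal.sosfiltfilt(sos, data, axis=-1)
    step = int(fs // 100)  # e.g. 1000 Hz -> 100 Hz
    return filtered[:, ::step]

def reject_subtrials(eog_epochs, threshold_uv=70.0):
    """Threshold rejection: keep subtrials whose EOG deflection from
    baseline stays within 70 uV (eog_epochs in uV, (n_epochs, n_times))."""
    baseline = eog_epochs[:, :1]
    return np.abs(eog_epochs - baseline).max(axis=-1) <= threshold_uv
```

A proper decimation would also apply an anti-alias filter before subsampling; here the 42 Hz stopband of the Chebyshev II filter already serves that role at the 100 Hz target rate.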
Signal Processing
Classifiers: LDA
Feature extraction: ROC-separability-index
Frequency bands: analyzed=[0.1, 250.0] Hz
Cross-Validation
Method: cross-validation
Evaluation type: offline
Performance (Original Study)
Accuracy: 90.0%
ITR: 17.39 bits/min
Best subject ITR: 25.2 bits/min
Best subject accuracy: 100.0%
C300s accuracy: 70.0%
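Information transfer rates like those above are conventionally computed with the Wolpaw formula. A sketch in bits per selection (the summary does not state the selection time, which is additionally needed to convert to bits/min, so no attempt is made to reproduce the reported figures):

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate in bits per selection:
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)
    if p <= 1.0 / n:           # at or below chance, ITR is taken as 0
        return 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
```

For example, 5 spatial targets at 90% accuracy give about 1.65 bits per selection; multiplying by selections per minute yields bits/min.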
BCI Application
Applications: speller, communication
Environment: laboratory
Online feedback: False
Tags
Pathology: Healthy
Modality: Auditory
Type: P300
Documentation
Description: A new auditory multi-class brain-computer interface paradigm using spatial hearing as an informative cue
DOI: 10.1371/journal.pone.0009813
Associated paper DOI: 10.3389/fnins.2011.00112
License: CC-BY-NC-ND-4.0
Investigators: Martijn Schreuder, Benjamin Blankertz, Michael Tangermann
Senior author: Michael Tangermann
Contact: martijn@cs.tu-berlin.de
Institution: Berlin Institute of Technology
Department: Machine Learning Department
Address: Berlin, Germany
Country: Germany
Repository: BNCI Horizon
Publication year: 2010
Funding: European ICT Programme Project FP7-224631; European ICT Programme Project FP7-216886; Deutsche Forschungsgemeinschaft (DFG) MU 987/3-1; Bundesministerium für Bildung und Forschung (BMBF) FKZ 01IB001A; Bundesministerium für Bildung und Forschung (BMBF) FKZ 01GQ0850; FP7-ICT PASCAL2 Network of Excellence ICT-216886
Ethics approval: Ethics Committee of the Charité University Hospital (number EA4/073/09)
Keywords: auditory BCI, P300, spatial hearing, multi-class, oddball paradigm
References
Schreuder, M., Rost, T., & Tangermann, M. (2011). Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in Neuroscience, 5, 112. https://doi.org/10.3389/fnins.2011.00112
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Notes
.. versionadded:: 1.2.0
Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID | NM000234
Title | BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset
Author (year) | Schreuder2015_ERP
Canonical | BNCI2015_ERP
Importable as | NM000234, Schreuder2015_ERP, BNCI2015_ERP
Year | 2011
Authors | Martijn Schreuder, Benjamin Blankertz, Michael Tangermann
License | CC-BY-NC-ND-4.0
Citation / DOI | Unknown
Source links | OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 21
Recordings: 42
Tasks: 1
Channels: 60
Sampling rate (Hz): 250.0
Duration (hours): 30.17912
Pathology: Healthy
Modality: Auditory
Type: Attention
Size on disk: 4.6 GB
File count: 42
Format: BIDS
License: CC-BY-NC-ND-4.0
DOI: —
API Reference#
Use the NM000234 class to access this dataset programmatically.
- class eegdash.dataset.NM000234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset
BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset.
- Study: nm000234 (NeMAR)
- Author (year): Schreuder2015_ERP
- Canonical: BNCI2015_ERP
Also importable as: NM000234, Schreuder2015_ERP, BNCI2015_ERP.
Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 21; recordings: 42; tasks: 1.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type: Path
- query#
Merged query with the dataset filter applied.
- Type:
dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type:
list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/nm000234
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000234
Examples
>>> from eegdash.dataset import NM000234
>>> dataset = NM000234(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset