NM000192: EEG dataset, 11 subjects#
BNCI 2015-006 Music BCI dataset
Access recordings and metadata through EEGDash.
Citation: M S Treder, H Purwins, D Miklody, I Sturm, B Blankertz (2014). BNCI 2015-006 Music BCI dataset.
Modality: eeg Subjects: 11 Recordings: 11 License: CC-BY-NC-ND-4.0 Source: nemar
Metadata: 90% complete
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000192
dataset = NM000192(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000192(cache_dir="./data", subject="01")
Advanced query
dataset = NM000192(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
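The `$in` operator above is MongoDB-style query syntax. As a sanity check before hitting the remote index, the same filter can be emulated locally in plain Python. The `matches` helper below is hypothetical (not part of the eegdash API); it only illustrates how the filter selects records.

```python
# Minimal re-implementation of the MongoDB-style "$in" filter used above,
# useful for checking which metadata records a query would select.
# `matches` is a hypothetical helper, not part of the eegdash API.
def matches(record: dict, query: dict) -> bool:
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            # Support the "$in" operator: value must be in the given list.
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:
            # Plain equality match.
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "07"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # ['01', '02']
```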
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000192,
  title  = {BNCI 2015-006 Music BCI dataset},
  author = {M S Treder and H Purwins and D Miklody and I Sturm and B Blankertz},
  year   = {2014},
}
About This Dataset#
BNCI 2015-006 Music BCI dataset
Dataset Overview
Code: BNCI2015-006
Paradigm: p300
DOI: 10.1088/1741-2560/11/2/026009
View full README
Subjects: 11
Sessions per subject: 1
Events: Target=1, NonTarget=2
Trial interval: [0, 1.0] s
File format: gdf
Data preprocessed: True
Contributing labs: Neurotechnology Group TU Berlin, Bernstein Focus Neurotechnology, Aalborg University Copenhagen, Berlin School of Mind and Brain
Acquisition
Sampling rate: 200.0 Hz
Number of channels: 64
Channel types: eeg=64
Channel names: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOGvu, F1, F10, F2, F3, F4, F5, F6, F7, F8, F9, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8
Montage: 10-10
Hardware: Brain Products
Reference: left mastoid
Ground: forehead
Sensor type: active electrode
Line frequency: 50.0 Hz
Online filters: bandpass 0.016-250 Hz
Impedance threshold: 20.0 kOhm
Cap manufacturer: Brain Products
Cap model: actiCAP
Electrode type: active
Participants
Number of subjects: 11
Health status: healthy
Clinical population: Healthy
Age: mean=28.0, min=21, max=50
Gender distribution: male=7, female=4
Handedness: all but one right-handed
BCI experience: naive
Species: human
Experimental Protocol
Paradigm: p300
Task type: auditory oddball
Number of classes: 2
Class labels: Target, NonTarget
Trial duration: 40.0 s
Tasks: selective auditory attention, deviant counting
Study design: Multi-streamed musical oddball paradigm with three concurrent instruments. Participants attended to one instrument and counted deviants while ignoring the other two instruments. Two music conditions tested: Synth-Pop (bass, drums, keyboard) and Jazz (double-bass, piano, flute).
Study domain: auditory BCI
Feedback type: none
Stimulus type: musical oddball
Stimulus modalities: visual, auditory
Primary modality: auditory
Synchronicity: asynchronous
Mode: offline
Training/test split: False
Instructions: Attend to cued instrument, count the number of deviants in that instrument, ignore other two instruments, maintain fixation on cross, minimize eye movements
Stimulus presentation: visual_cue=instrument indication, fixation_cross=continuous during music playback, music_clips=40-second polyphonic music
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
Paradigm-Specific Parameters
Detected paradigm: p300
Number of targets: 3
Data Structure
Trials: 3-7 deviants per instrument per clip
Blocks per session: 10
Trials context: per_instrument_per_clip
Preprocessing
Data state: epoched
Preprocessing applied: True
Steps: downsampling, lowpass filtering, epoching, baseline correction, artifact rejection
Lowpass filter: 42.0 Hz
Filter type: Chebyshev
Artifact methods: min-max criterion (100 μV threshold on Fp1 or Fp2)
Downsampled to: 250.0 Hz
Epoch window: [-0.2, 1.2] s
Notes: Artifact rejection was applied only to the training set; artifacts were preserved in the test set. Chebyshev filter: passband edge 42 Hz, stopband edge 49 Hz.
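The epoching, baseline-correction, and min-max artifact steps above can be sketched on synthetic data. This is an illustrative NumPy sketch, not the original pipeline; the parameters (250 Hz, [-0.2, 1.2] s window, 100 uV peak-to-peak criterion on the frontal channels) come from this card, while the data and the Fp1/Fp2 channel indices are made up for the example.

```python
import numpy as np

# Illustrative epoching pipeline with the parameters from this card.
fs = 250                # Hz, post-downsampling rate
pre, post = 0.2, 1.2    # epoch window relative to stimulus onset, in seconds
rng = np.random.default_rng(0)
n_channels, n_samples = 64, fs * 60
data = rng.normal(0.0, 10.0, (n_channels, n_samples))  # synthetic EEG, in uV
events = np.array([fs * 5, fs * 20, fs * 40])          # stimulus onsets (samples)

def epoch(data, onset, fs, pre, post):
    """Cut one epoch and subtract the pre-stimulus baseline mean per channel."""
    start, stop = onset - int(pre * fs), onset + int(post * fs)
    ep = data[:, start:stop].copy()
    baseline = ep[:, : int(pre * fs)].mean(axis=1, keepdims=True)
    return ep - baseline

def is_artifact(ep, frontal_idx, threshold_uv=100.0):
    """Min-max criterion: reject if peak-to-peak on any frontal channel exceeds threshold."""
    ptp = ep[frontal_idx].max(axis=1) - ep[frontal_idx].min(axis=1)
    return bool((ptp > threshold_uv).any())

frontal_idx = [38, 39]  # illustrative; look up the real Fp1/Fp2 positions via raw.ch_names
epochs = [epoch(data, on, fs, pre, post) for on in events]
clean = [ep for ep in epochs if not is_artifact(ep, frontal_idx)]
print(len(epochs), len(clean), epochs[0].shape)
```

Each epoch is 0.2 s + 1.2 s at 250 Hz, i.e. 350 samples per channel.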
Signal Processing
Classifiers: LDA with shrinkage covariance
Feature extraction: spatio-temporal features, voltage averaging in time windows
Frequency bands: alpha=[8, 13] Hz
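The classifier named above is LDA with a shrinkage-regularized covariance. A minimal NumPy sketch of that idea is below; it uses a fixed shrinkage coefficient for simplicity, whereas the original study estimates it analytically (Ledoit-Wolf style), and the toy data are made up.

```python
import numpy as np

def fit_shrinkage_lda(X, y, gamma=0.1):
    """Fit binary LDA with covariance shrunk toward a scaled identity:
    w = C_shrunk^-1 (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance of both classes.
    C = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)
    # Shrinking toward nu*I stabilizes the inverse in high dimensions.
    nu = np.trace(C) / C.shape[0]
    C_shrunk = (1 - gamma) * C + gamma * nu * np.eye(C.shape[0])
    w = np.linalg.solve(C_shrunk, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

# Toy data with 189-dimensional features, matching the study's feature count.
rng = np.random.default_rng(1)
d, n = 189, 200
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(0.5, 1.0, (n, d))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)
w, b = fit_shrinkage_lda(X, y)
pred = (X @ w + b > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```

Shrinkage matters here because 189 feature dimensions against a few hundred epochs makes the raw sample covariance poorly conditioned.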
Cross-Validation
Method: leave-one-clip-out
Evaluation type: cross_trial
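Leave-one-clip-out means all epochs from one music clip form the test fold, so no clip contributes to both training and testing. A plain-Python sketch of the split logic (the clip IDs are illustrative):

```python
# Leave-one-clip-out cross-validation: one fold per unique clip.
def leave_one_clip_out(clip_ids):
    """Yield (train_indices, test_indices) pairs, holding out one clip at a time."""
    for held_out in sorted(set(clip_ids)):
        train = [i for i, c in enumerate(clip_ids) if c != held_out]
        test = [i for i, c in enumerate(clip_ids) if c == held_out]
        yield train, test

# Example: 6 epochs drawn from 3 clips.
clip_ids = ["clip1", "clip1", "clip2", "clip2", "clip3", "clip3"]
for train, test in leave_one_clip_out(clip_ids):
    print(train, test)
```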
Performance (Original Study)
Accuracy: 91.0%
Binary classifier accuracy (Synth-Pop): 69.25%
Binary classifier accuracy (Jazz): 71.47%
Posterior-probability accuracy (Synth-Pop): 91.0%
Posterior-probability accuracy (Jazz): 91.5%
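The posterior-probability figures come from the study's second approach: each of the three instrument-specific classifiers assigns every epoch a probability of being an attended target, evidence is accumulated over the clip, and the instrument with the most evidence wins. A hedged sketch of that decision rule, with entirely illustrative numbers:

```python
import numpy as np

def decide_attended(posteriors):
    """posteriors: (n_epochs, n_instruments) array of probabilities in (0, 1).
    Sum log-posteriors over epochs and pick the instrument with the highest total."""
    total_log_post = np.log(posteriors).sum(axis=0)
    return int(np.argmax(total_log_post))

# Illustrative clip: 10 epochs, 3 instruments, instrument 1 attended.
rng = np.random.default_rng(2)
posteriors = rng.uniform(0.2, 0.6, (10, 3))
posteriors[:, 1] += 0.35  # attended instrument gets consistently higher posteriors
print("decided instrument:", decide_attended(posteriors))
```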
BCI Application
Applications: communication, speller, message selection
Environment: laboratory
Online feedback: False
Tags
Pathology: Healthy
Modality: Auditory
Type: Perception, Attention
Documentation
Description: Multi-streamed musical oddball paradigm for auditory BCI. Each of three concurrent instruments has its own standard and deviant patterns. Participants selectively attend to one instrument to detect deviants.
DOI: 10.1088/1741-2560/11/2/026009
Associated paper DOI: 10.1088/1741-2560/11/2/026009
License: CC-BY-NC-ND-4.0
Investigators: M S Treder, H Purwins, D Miklody, I Sturm, B Blankertz
Senior author: B Blankertz
Contact: matthias.treder@tu-berlin.de
Institution: Technische Universität Berlin
Department: Neurotechnology Group; Bernstein Focus: Neurotechnology
Address: Berlin, Germany
Country: Germany
Repository: GitHub
Data URL: https://github.com/bbci/bbci_public/blob/master/doc/index.markdown
Publication year: 2014
Funding: German Bundesministerium für Bildung und Forschung (Grant Nos. 16SV5839 and 01GQ0850)
Ethics approval: Declaration of Helsinki
Acknowledgements: We acknowledge financial support by the German Bundesministerium für Bildung und Forschung (Grant Nos. 16SV5839 and 01GQ0850).
Keywords: brain–computer interface, EEG, auditory, music, attention, oddball paradigm, P300
Abstract
Polyphonic music (music consisting of several instruments playing in parallel) is an intuitive way of embedding multiple information streams. The different instruments in a musical piece form concurrent information streams that seamlessly integrate into a coherent and hedonistically appealing entity. Here, we explore polyphonic music as a novel stimulation approach for use in a brain–computer interface. In a multi-streamed oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips. Each instrument formed an oddball stream with its own specific standard stimuli (a repetitive musical pattern) and oddballs (deviating musical pattern). Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument can be classified offline with a mean accuracy of 91% across 11 participants. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain–computer interface and music research.
Methodology
Participants listened to 40-second polyphonic music clips with three concurrent instruments (Synth-Pop: bass, drums, keyboard; Jazz: double-bass, piano, flute). Each instrument had standard patterns and infrequent deviants (3-7 per clip). Participants were cued to attend to one instrument and count deviants. EEG recorded at 1000 Hz with 64 electrodes, downsampled to 250 Hz, lowpass filtered (Chebyshev, 42 Hz passband), epoched (-200 to 1200 ms), baseline corrected, and artifact rejected. Two classification approaches: (1) general binary classifier and (2) instrument-specific classifiers with posterior probabilities. Features: spatio-temporal (3 time intervals × 63 electrodes = 189 dimensions). LDA with shrinkage covariance. Leave-one-clip-out cross-validation. Main experiment: 10 blocks of 21 clips (7 clips per instrument as target). Total: 3 Synth-Pop mixed blocks, 3 Jazz mixed blocks, 2 Synth-Pop solo blocks, 2 Jazz solo blocks.
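The spatio-temporal feature step described above (average voltage per time interval per electrode, 3 intervals x 63 electrodes = 189 dimensions) can be sketched in NumPy. The interval boundaries below are illustrative; the paper selects them from discriminative ERP windows.

```python
import numpy as np

# Illustrative spatio-temporal feature extraction for one epoch:
# average the voltage in each of three time intervals on 63 electrodes.
fs = 250
epoch = np.random.default_rng(3).normal(size=(63, int(1.4 * fs)))  # [-0.2, 1.2] s window
intervals_s = [(0.2, 0.4), (0.4, 0.6), (0.6, 0.8)]  # seconds after stimulus onset

features = []
for start, stop in intervals_s:
    # Onset sits 0.2 s into the epoch; convert seconds to sample indices.
    a = int((start + 0.2) * fs)
    b = int((stop + 0.2) * fs)
    features.append(epoch[:, a:b].mean(axis=1))  # one value per electrode
feature_vector = np.concatenate(features)
print(feature_vector.shape)  # (189,)
```

Concatenating the per-interval means yields the 189-dimensional vector fed to the LDA classifier.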
References
Treder, M. S., Purwins, H., Miklody, D., Sturm, I., & Blankertz, B. (2014). Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification. Journal of Neural Engineering, 11(2), 026009. https://doi.org/10.1088/1741-2560/11/2/026009
Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Notes: Added in version 1.2.0.
Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID: NM000192
Title: BNCI 2015-006 Music BCI dataset
Author (year): Treder2015_BNCI_006_Music
Canonical: BNCI2015_BNCI_006_Music, BNCI_2015_006_Music, BNCI2015_006_MusicBCI
Importable as: NM000192
Year: 2014
Authors: M S Treder, H Purwins, D Miklody, I Sturm, B Blankertz
License: CC-BY-NC-ND-4.0
Citation / DOI: Unknown
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 11
Recordings: 11
Tasks: 1
Channels: 64
Sampling rate (Hz): 200.0
Duration (hours): 33.95
Pathology: Healthy
Modality: Auditory
Type: Attention
Size on disk: 4.4 GB
File count: 11
Format: BIDS
License: CC-BY-NC-ND-4.0
DOI: —
API Reference#
Use the NM000192 class to access this dataset programmatically.
class eegdash.dataset.NM000192(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
Bases: EEGDashDataset

BNCI 2015-006 Music BCI dataset.

Study: nm000192 (NeMAR)
Author (year): Treder2015_BNCI_006_Music
Canonical: BNCI2015_BNCI_006_Music, BNCI_2015_006_Music, BNCI2015_006_MusicBCI
Also importable as: NM000192, Treder2015_BNCI_006_Music, BNCI2015_BNCI_006_Music, BNCI_2015_006_Music, BNCI2015_006_MusicBCI.
Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 11; recordings: 11; tasks: 1.

Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#
Local dataset cache directory (cache_dir / dataset_id).
Type: Path

query#
Merged query with the dataset filter applied.
Type: dict

records#
Metadata records used to build the dataset, if pre-fetched.
Type: list[dict] | None

Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/nm000192
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000192

Examples
>>> from eegdash.dataset import NM000192
>>> dataset = NM000192(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset