DS006434#
The auditory brainstem response to natural speech is not affected by selective attention
Access recordings and metadata through EEGDash.
Citation: Thomas J Stoll, Nathan D Vandjelovic, Melissa J Polonenko, Nadja R S Li, Adrian K C Lee, Ross K Maddox (2025). The auditory brainstem response to natural speech is not affected by selective attention. 10.18112/openneuro.ds006434.v1.2.0
Modality: eeg Subjects: 66 Recordings: 898 License: CC0 Source: openneuro
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS006434
dataset = DS006434(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS006434(cache_dir="./data", subject="01")
Advanced query
dataset = DS006434(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds006434,
title = {The auditory brainstem response to natural speech is not affected by selective attention},
author = {Thomas J Stoll and Nathan D Vandjelovic and Melissa J Polonenko and Nadja R S Li and Adrian K C Lee and Ross K Maddox},
doi = {10.18112/openneuro.ds006434.v1.2.0},
url = {https://doi.org/10.18112/openneuro.ds006434.v1.2.0},
}
About This Dataset#
Overview
This is the dataset for our study investigating the effects of selective attention to speech stimuli in the subcortex and cortex, entitled “The auditory brainstem response to natural speech is not affected by selective attention” by Stoll et al. (2025). Please cite our paper if you use our dataset.
It contains EEG data for three experiments, detailed in the paper and briefly summarized below. Code and stimuli to derive the responses are provided in the Dataset folder and on our lab’s github: maddoxlab/stoll_et_al_selective_attention.
Experiment 1 - diotic stimuli (exp1Diotic)
This “task” includes EEG data for 28 subjects who listened to 120 trials
each (64 s each; total 128 minutes) of two audiobooks - A Wrinkle in Time
(female narrator) and The Alchemyst (male narrator). Stimuli were set to
65 dB SPL then summed together to be presented diotically.
Subjects sat at a computer desk in a soundproof room.
They were instructed to attend to only one narrator on each trial, with cues
given before they started the trial and through a fixation dot which remained
for the duration of the trial. For details, see the
Details about the experiment section and refer to our paper.
EEG was recorded simultaneously from a 32-channel active montage (to examine cortical responses) and a 2-channel passive bipolar montage (FCz to earlobes, to examine subcortical responses). On a subset of the subjects (1, 3, 4, 7, 8, 9, 10, 11, 12, 13, 16, 18) an additional electrode was placed on the eardrum. Data are split into cortical (active) electrodes and subcortical (passive) electrodes. Because the data were collected simultaneously, all electrodes were sampled at 25 kHz. To reduce file size and computation time, the cortical electrodes were downsampled to 1 kHz and the subcortical electrodes to 10 kHz.
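Given the description above, the cortical and subcortical streams can be told apart from the sampling rate alone. A minimal sketch (the helper name and the 10 kHz threshold are our own convention, not part of the dataset):

```python
def montage_type(sfreq: float) -> str:
    """Classify a recording by its sampling rate, following the
    dataset description: cortical (active) channels were downsampled
    to 1 kHz, subcortical (passive) channels to 10 kHz."""
    if sfreq >= 10_000:
        return "subcortical (passive)"
    return "cortical (active)"

# e.g., applied to raw.info["sfreq"] from an EEGDash recording
print(montage_type(1000.0))   # cortical (active)
print(montage_type(10000.0))  # subcortical (passive)
```

Note that a small number of recordings retain the original 25 kHz rate (see Technical Details below), so check the actual channel labels when a recording does not fit this simple split.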
Experiment 2 - dichotic stimuli (exp2Dichotic)
This “task” contains EEG data for 25 subjects who listened to 60 trials
each (64 s each; total 64 minutes) of two audiobooks - A Wrinkle in Time
(female narrator) and The Alchemyst (male narrator). Stimuli were set to
65 dB SPL and presented dichotically. Subjects sat at a computer desk in a
soundproof room. They were instructed to attend to only one narrator on
each trial (indicated by the story name, talker sex, and direction) with cues
given before they started the trial and through a fixation dot with an arrow
which remained for the duration of the trial. For details, see the
Details about the experiment section and refer to our paper. The records
of individual participant age and sex no longer exist, but overall statistics
are reported in the paper.
EEG was recorded simultaneously from a 32-channel active montage (to examine cortical responses) and from passive electrodes in a bipolar montage, with the noninverting electrode on FCz, the inverting electrode on the earlobe, and ground on the forehead. The side on which the earlobe electrode was placed was counterbalanced across subjects.
Experiment 3 - passive listening to stimuli from Forte et al. (exp3Passive)
This “task” contains EEG data for 14 subjects who listened to 32 trials
each (~117 s each; total ~62 minutes) of four audiobooks - Tales of Troy:
Ulysses the Sacker of Cities and The Green Forest Fairy Book narrated by
James K. White for the male speech and The Children of Odin and The Adventures
of Odysseus and the Tale of Troy narrated by Elizabeth Klett for the female
speech. These audiobooks were selected to match the study by Forte et al. (2017),
who provided us with the audio files. Stimuli were set to 73 dB SPL then
summed together to be presented diotically (i.e., at 76 dB SPL). The stories
were paired in the same manner as in Forte et al. (2017). Subjects sat at a
computer desk in a soundproof room. They were instructed to ignore the audio
as best they could and distract themselves by watching silent captioned videos
of their choosing or by reading. For details, see the Details about the experiment
section and refer to our paper.
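The 3 dB jump from 73 to 76 dB SPL follows from power summation of two equal-level, incoherent sources. A quick check (the function name is ours, for illustration only):

```python
import math

def combined_level_db(levels_db):
    """Power-sum sound pressure levels (in dB SPL) of incoherent sources:
    convert each level to relative power, sum, convert back to dB."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# Two 73 dB SPL narrators summed: 73 + 10*log10(2) ≈ 76 dB SPL
print(round(combined_level_db([73.0, 73.0]), 1))  # 76.0
```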
EEG was recorded with passive electrodes in a bipolar montage, with the noninverting electrode on FCz, the inverting electrode on the earlobe, and ground on the forehead.
Format
The dataset is formatted according to the Brain Imaging Data Structure (BIDS) for EEG.
See the dataset_description.json file for the specific version used.
Generally, detailed event data are in the .tsv files, with descriptions
in the accompanying .json files. Raw EEG files are provided in the Brain
Products format.
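As a rough illustration of how a BIDS-style path for one of these recordings is assembled (the exact entity strings below are assumptions; the actual filenames may include additional entities such as run or session, so check the file listing):

```python
from pathlib import Path

def bids_eeg_path(root: str, subject: str, task: str, ext: str = ".vhdr") -> Path:
    """Build a hypothetical BIDS-style path for a Brain Products EEG
    recording: <root>/sub-<subject>/eeg/sub-<subject>_task-<task>_eeg<ext>."""
    stem = f"sub-{subject}_task-{task}_eeg{ext}"
    return Path(root) / f"sub-{subject}" / "eeg" / stem

p = bids_eeg_path("ds006434", "01", "exp1Diotic")
print(p.as_posix())  # ds006434/sub-01/eeg/sub-01_task-exp1Diotic_eeg.vhdr
```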
Details about the experiment
For a detailed description of the task, see Stoll et al. (2025) as well as the supplied json files.
Trigger onset times have already been corrected for the tubing delay of the insert earphones. Trial numbers and additional event metadata are in each "*_eeg_events.tsv" file, which is sufficient to determine which trial corresponded to which chapter and which narrator the subjects were instructed to attend to. Because chapters were ordered to allow subjects to follow the stories, all subjects had the same trial order in experiments 1 and 2. Story order was randomized in experiment 3, with that information stored in the "*_eeg_events.tsv" file.
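Event tables like these can be read with plain csv (or pandas). The sketch below parses a made-up two-row excerpt; the actual column names beyond BIDS's standard onset/duration are assumptions and should be taken from the accompanying .json sidecar:

```python
import csv
import io

# Hypothetical excerpt of a *_eeg_events.tsv file; the "trial" and
# "attend" column names are illustrative, not verified against the data.
tsv = (
    "onset\tduration\ttrial\tattend\n"
    "12.5\t64.0\t1\tfemale\n"
    "80.1\t64.0\t2\tmale\n"
)

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
attended = [row["attend"] for row in rows]
print(attended)  # ['female', 'male']
```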
Dataset Information#
Dataset ID | ds006434
Title | The auditory brainstem response to natural speech is not affected by selective attention
Year | 2025
Authors | Thomas J Stoll, Nathan D Vandjelovic, Melissa J Polonenko, Nadja R S Li, Adrian K C Lee, Ross K Maddox
License | CC0
Citation / DOI | doi:10.18112/openneuro.ds006434.v1.2.0
Source links | OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 66
Recordings: 898
Tasks: 3
Channels: 32 (104), 2 (56), 1 (48), 3 (28)
Sampling rate (Hz): 10000.0 (104), 1000.0 (56), 500.0 (48), 25000.0 (28)
Duration (hours): 0.0
Pathology: Healthy
Modality: Auditory
Type: Attention
Size on disk: 103.0 GB
File count: 898
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds006434.v1.2.0
API Reference#
Use the DS006434 class to access this dataset programmatically.
- class eegdash.dataset.DS006434(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset
OpenNeuro dataset ds006434. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 66; recordings: 118; tasks: 5.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type: Path
- query#
Merged query with the dataset filter applied.
- Type: dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds006434 NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds006434
Examples
>>> from eegdash.dataset import DS006434
>>> dataset = DS006434(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset