DS005408#

The effect of speech masking on the subcortical response to speech

Access recordings and metadata through EEGDash.

Citation: Melissa J. Polonenko, Ross K. Maddox (2024). The effect of speech masking on the subcortical response to speech. 10.18112/openneuro.ds005408.v1.0.0

Modality: eeg Subjects: 25 Recordings: 206 License: CC0 Source: openneuro

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS005408

dataset = DS005408(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS005408(cache_dir="./data", subject="01")

Advanced query

dataset = DS005408(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds005408,
  title = {The effect of speech masking on the subcortical response to speech},
  author = {Melissa J. Polonenko and Ross K. Maddox},
  doi = {10.18112/openneuro.ds005408.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds005408.v1.0.0},
}

About This Dataset#

README

Details related to access to the data

Please contact the following authors for further information:

Melissa Polonenko (email: mpolonen@umn.edu), Ross Maddox (email: rkmaddox@med.umich.edu)


Overview

This is the “peaky_snr” dataset for the paper Polonenko MJ & Maddox RK (2024), with citation listed below.

BioRxiv: The effect of speech masking on the subcortical response to speech

Auditory brainstem responses (ABRs) were derived from continuous peaky speech presented by one to five simultaneous talkers, and from clicks. Data were collected from June to July 2021.

Goal: To better understand masking’s effects on the subcortical neural encoding of naturally uttered speech in human listeners.

To do this we leveraged our recently developed method for determining the auditory brainstem response (ABR) to speech (Polonenko and Maddox, 2021). Whereas our previous work was aimed at encoding of single talkers, here we determined the ABR to speech in quiet as well as in the presence of varying numbers of other talkers.

The details of the experiment can be found at Polonenko & Maddox (2024).

Stimuli:

  1. Randomized click trains at an average rate of 40 Hz: 60 x 10 s trials, for a total of 10 minutes.

  2. Peaky speech from up to 5 male narrators: 30 minutes at each SNR (clean, 0 dB, -3 dB, -6 dB), corresponding to 1, 2, 3, and 5 talkers presented simultaneously, each set to 65 dB.

NOTE: files for each story were completely randomized. Random combinations were created so that each story was equally represented in the data.

The code for stimulus preprocessing and EEG analysis is available on Github:

polonenkolab/peaky_snr

Format

The dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from participants 01 to 25 in raw BrainVision format (three files each: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. The stimulus files contain the audio (‘audio’) and regressors for the deconvolution (‘pinds’ are the pulse indices; ‘anm’ is an auditory nerve model regressor, which was used during analyses but was not included as part of the article).
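The regressor files can be read with h5py. A minimal sketch, assuming only the three keys named above (the helper name is ours):

```python
import h5py

def load_regressors(path):
    """Read one peaky-speech stimulus file into arrays.

    Keys follow the dataset description: 'audio' is the stimulus
    waveform, 'pinds' the pulse indices, and 'anm' the auditory
    nerve model regressor.
    """
    with h5py.File(path, "r") as f:
        return {key: f[key][:] for key in ("audio", "pinds", "anm")}
```

For example, `load_regressors("stimuli/alice_000_peaky_diotic_regress.hdf5")` for a file named in the events description; adjust the path to wherever the stimuli folder sits in your local copy.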

Detailed event data can generally be found in the .tsv files, with descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format.
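Because the events files are BIDS-style tab-separated text, they load directly with pandas. A small sketch; the helper name and the example path below are illustrative, so check the actual file names in your local copy:

```python
import pandas as pd

def load_events(tsv_path):
    """Load a BIDS *_events.tsv file.

    'n/a' is the BIDS marker for missing values, so map it to NaN.
    """
    return pd.read_csv(tsv_path, sep="\t", na_values="n/a")
```

Usage might look like `load_events("sub-01/eeg/sub-01_task-peaky_snr_events.tsv")` (hypothetical path).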

Participants

25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years)

Inclusion criteria:
  1. Age between 18-40 years

  2. Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz

  3. Speak English as their primary language

Please see participants.tsv for more information.

Apparatus

Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL (per story; total for 5 talkers = 71 dB) and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom Python scripts using expyfun controlled the experiment and stimulus presentation.

Details about the experiment

For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied task-peaky_snr_eeg.json file. The 4 SNR speech conditions and the story tokens were randomized, which means that participants could not follow the stories. For clicks, the trials were not randomized (the click trains were already random).

Trigger onset times in the .tsv files have already been corrected for the tubing delay of the insert earphones (but not in the events of the raw files). Triggers with a value of “1” were stamped at the onset of the 10 s audio, and shortly afterwards triggers with values of “4” or “8” were stamped to encode information about the trial. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 ** (b + 2). These trial triggers, along with additional event metadata, are specified in each ‘*_eeg_events.tsv’ file, which is sufficient to determine which trial corresponded to which stimulus type (clicks or speech), which SNR, and which files of which stories were presented; e.g., alice_000_peaky_diotic_regress.hdf5 is the first file of the story called ‘alice’ (Alice in Wonderland).
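That bit encoding can be inverted to recover the trial number from a run of “4”/“8” triggers. A sketch, assuming bits arrive most-significant first (verify the bit order against the ‘*_eeg_events.tsv’ files; the function name is ours):

```python
def decode_trial_number(trigger_values):
    """Invert the 2 ** (b + 2) encoding: trigger 4 is bit 0, trigger 8 is bit 1."""
    number = 0
    for value in trigger_values:
        bit = value.bit_length() - 3  # 4 -> 0, 8 -> 1
        number = (number << 1) | bit
    return number
```

For instance, the trigger run [8, 4] decodes to trial number 2 under this bit order.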

Dataset Information#

Dataset ID

DS005408

Title

The effect of speech masking on the subcortical response to speech

Year

2024

Authors

Melissa J. Polonenko, Ross K. Maddox

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds005408.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds005408,
  title = {The effect of speech masking on the subcortical response to speech},
  author = {Melissa J. Polonenko and Ross K. Maddox},
  doi = {10.18112/openneuro.ds005408.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds005408.v1.0.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 25

  • Recordings: 206

  • Tasks: 1

Channels & sampling rate
  • Channels: 2

  • Sampling rate (Hz): 10000.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Auditory

  • Type: Perception

Files & format
  • Size on disk: 15.3 GB

  • File count: 206

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds005408.v1.0.0

Provenance

API Reference#

Use the DS005408 class to access this dataset programmatically.

class eegdash.dataset.DS005408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds005408. Modality: eeg; Experiment type: Perception; Subject type: Healthy. Subjects: 25; recordings: 29; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/ds005408

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005408

Examples

>>> from eegdash.dataset import DS005408
>>> dataset = DS005408(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#