DS005340#

Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech

Access recordings and metadata through EEGDash.

Citation: Melissa J. Polonenko, Ross K. Maddox (2024). Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. 10.18112/openneuro.ds005340.v1.0.4

Modality: eeg Subjects: 15 Recordings: 112 License: CC0 Source: openneuro Citations: 1.0

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS005340

dataset = DS005340(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS005340(cache_dir="./data", subject="01")

Advanced query

dataset = DS005340(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds005340,
  title = {Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech},
  author = {Melissa J. Polonenko and Ross K. Maddox},
  doi = {10.18112/openneuro.ds005340.v1.0.4},
  url = {https://doi.org/10.18112/openneuro.ds005340.v1.0.4},
}

About This Dataset#

README

Details related to access to the data

Please contact the following authors for further information:

Melissa Polonenko (email: mpolonen@umn.edu), Ross Maddox (email: rkmaddox@med.umich.edu)


Overview

This is the “peaky_pitchshift” dataset for the paper Polonenko MJ & Maddox RK (2024), cited below.

Peer-reviewed manuscript: Melissa J. Polonenko, Ross K. Maddox; Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. JASA Express Lett. 1 November 2024; 4 (11): 114401. https://doi.org/10.1121/10.0034329

bioRxiv preprint: Melissa Jane Polonenko, Ross K. Maddox (2024). Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. bioRxiv 2024.07.12.603125; doi: https://doi.org/10.1101/2024.07.12.603125

Auditory brainstem responses (ABRs) were derived to continuous peaky speech from two talkers with different fundamental frequencies (f0s) and to clicks whose mean stimulus rates were set to the mean f0s. Data were collected from May to June 2021. Aims:

  1. replicate the male/female talker effect with each at their natural f0

  2. systematically determine if f0 is the main driver of this talker difference

  3. evaluate if the f0 effect resembles the click rate effect

The details of the experiment can be found in Polonenko & Maddox (2024). Stimuli:

  1. Randomized click trains at 3 stimulus rates (123, 150, 183 Hz): 30 × 10 s trials per rate, for a total of 90 trials (15 min; 5 min per rate)

  2. Peaky speech from a male and a female narrator at 3 f0s (123, 150, 183 Hz): 120 × 10 s trials for each of the 6 narrator–f0 combinations, for a total of 720 trials (2 hours; 20 min per combination)

NOTE: The f0s used were each narrator's original f0 (low and high, respectively), each f0 shifted to the other narrator's f0, and the f0 at the midpoint between the two. The click rates were set to the mean f0s used for the speech.
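As a quick sanity check, the stated session durations follow directly from the trial counts above (a sketch in plain Python; the constants simply restate the numbers from the text):

```python
TRIAL_S = 10  # each trial is 10 s of audio

# Clicks: 3 stimulus rates x 30 trials each
click_trials = 3 * 30
click_minutes = click_trials * TRIAL_S / 60  # 15.0 min total, 5 min per rate

# Peaky speech: 2 narrators x 3 f0s x 120 trials each
speech_trials = 2 * 3 * 120
speech_hours = speech_trials * TRIAL_S / 3600  # 2.0 h total, 20 min per combination
```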

The code for stimulus preprocessing and EEG analysis is available on Github:

polonenkolab/peaky_pitchshift

Format

The dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from participants 01 to 15 in raw BrainVision format (3 files per recording: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. Each stimulus file contains the audio (‘x’) and regressors for the deconvolution: ‘pinds’ are the pulse indices, and ‘anm’ is an auditory nerve model regressor, which was used during analyses but was not included as part of the article.

Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format.
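As an illustration of working with those event files, the snippet below parses a BIDS-style events .tsv using only the standard library. The inline string is a toy stand-in, and the column names shown (onset, duration, value) follow the general BIDS events convention; this dataset's actual *_events.tsv files may carry additional columns:

```python
import csv
import io

# Toy stand-in for one of the dataset's *_events.tsv files
# (tab-separated, with a header row, per the BIDS events convention).
events_tsv = "onset\tduration\tvalue\n1.234\t0.0\t1\n1.250\t0.0\t8\n"

# In practice you would pass an open file handle instead of a StringIO.
rows = list(csv.DictReader(io.StringIO(events_tsv), delimiter="\t"))
onsets = [float(r["onset"]) for r in rows]
# rows[0] -> {'onset': '1.234', 'duration': '0.0', 'value': '1'}
```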

Participants

15 participants, mean ± SD age of 24.1 ± 6.1 years (range: 19–35 years). Inclusion criteria:

  1. Age between 18-40 years

  2. Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz

  3. Speak English as their primary language

Please see participants.tsv for more information.

Apparatus

Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom Python scripts using expyfun were used to control the experiment and stimulus presentation.

Details about the experiment

For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied task-peaky_pitch_eeg.json file. The 6 peaky speech conditions (2 narrators × 3 f0s) were randomly interleaved within each block of trials (i.e., for trial 1, the 6 conditions were presented in random order), and the story token was randomized, so participants could not follow the story. The click trials were not randomized (the clicks were already random).

Trigger onset times in the .tsv files have already been corrected for the tubing delay of the insert earphones (but the events in the raw files have not). Triggers with value “1” were recorded at the onset of each 10 s audio segment, and shortly afterward triggers with values “4” or “8” were stamped to indicate the overall trial number, out of 120 for each speech condition and out of 30 for each click condition. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 ** (b + 2). These trial numbers and additional event metadata are specified in each ‘*_eeg_events.tsv’ file, which is sufficient to determine which trial corresponded to which stimulus type (clicks, male narrator, female narrator), which f0 (low, mid, high), and which file (e.g., male_low_000_regress.hdf5 for the male narrator with the low f0).
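The trial-number encoding described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the dataset's analysis pipeline; in particular, the most-significant-bit-first ordering of the stamped triggers is an assumption:

```python
def trial_number_to_triggers(trial):
    """Encode a decimal trial number as a sequence of trigger values.

    Each binary digit b of the trial number becomes a trigger of value
    2 ** (b + 2): a 0 bit maps to 4 and a 1 bit maps to 8.
    MSB-first bit ordering is an assumption for this sketch.
    """
    return [2 ** (int(bit) + 2) for bit in format(trial, "b")]

# Example: trial 5 is binary 101, giving triggers [8, 4, 8]
```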

Dataset Information#

Dataset ID

DS005340

Title

Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech

Year

2024

Authors

Melissa J. Polonenko, Ross K. Maddox

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds005340.v1.0.4

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds005340,
  title = {Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech},
  author = {Melissa J. Polonenko and Ross K. Maddox},
  doi = {10.18112/openneuro.ds005340.v1.0.4},
  url = {https://doi.org/10.18112/openneuro.ds005340.v1.0.4},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 15

  • Recordings: 112

  • Tasks: 1

Channels & sampling rate
  • Channels: 2

  • Sampling rate (Hz): 10000.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Auditory

  • Type: Perception

Files & format
  • Size on disk: 9.5 GB

  • File count: 112

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds005340.v1.0.4

Provenance

API Reference#

Use the DS005340 class to access this dataset programmatically.

class eegdash.dataset.DS005340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds005340. Modality: eeg; Experiment type: Perception; Subject type: Healthy. Subjects: 15; recordings: 112; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/ds005340

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005340

Examples

>>> from eegdash.dataset import DS005340
>>> dataset = DS005340(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#