DS006334#
Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories
Access recordings and metadata through EEGDash.
Citation: Biau E, Wang D, Park H, Jensen O, Hanslmayr S (2025). Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories. 10.18112/openneuro.ds006334.v1.0.0
Modality: meg | Subjects: 30 | Recordings: 128 | License: CC0 | Source: openneuro
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS006334
dataset = DS006334(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS006334(cache_dir="./data", subject="01")
Advanced query
dataset = DS006334(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
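The advanced query shown earlier uses a MongoDB-style `$in` filter. A small helper like the one below (hypothetical, not part of the eegdash API) can build that filter from a list of subjects, assuming the zero-padded two-digit subject labels this dataset uses:

```python
def subjects_query(subjects):
    """Build a MongoDB-style filter selecting the given subjects.

    Accepts ints (zero-padded to two digits, e.g. 1 -> "01") or
    ready-made string labels such as "01".
    """
    labels = [f"{s:02d}" if isinstance(s, int) else s for s in subjects]
    return {"subject": {"$in": labels}}

# subjects_query([1, 2]) -> {"subject": {"$in": ["01", "02"]}}
# The result can be passed as the `query` argument of DS006334.
```

This simply reproduces the shape of the Advanced query example, so it stays within the documented filter fields.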
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds006334,
  title = {Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories},
  author = {Biau E and Wang D and Park H and Jensen O and Hanslmayr S},
  year = {2025},
  doi = {10.18112/openneuro.ds006334.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds006334.v1.0.0},
}
About This Dataset#
General information: This repository contains the raw MEG data, T1-weighted anatomical scans, the corresponding behavioural logfiles, and the scripts used to perform the analyses reported in the manuscript: Biau, E., Wang, D., Park, H., Jensen, O., & Hanslmayr, S. (2025). Neocortical and hippocampal theta oscillations track audiovisual integration and replay of speech memories. Journal of Neuroscience, 45(21).

Task overview: The experimental paradigm consisted of repeated blocks, each composed of three successive tasks: encoding, distractor, and retrieval.

1) Encoding: Participants were presented with a series of audiovisual speech movies and performed an audiovisual synchrony detection task. Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by the presentation of a random synchronous or asynchronous audiovisual speech movie (5 s). After the movie ended, participants had to determine whether video and sound had been presented in synchrony or asynchrony by pressing the index-finger (synchronous) or middle-finger (asynchronous) button of the response device as quickly and accurately as possible. The next trial started after the participant's response.

2) Distractor: After encoding, participants performed a short distractor task whose sole purpose was to clear working memory. Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by a random number (from 1 to 99) displayed at the centre of the screen. Participants had to determine as quickly and accurately as possible whether the number was odd or even by pressing the index-finger (odd) or middle-finger (even) button of the response device. Each distractor task contained 20 trials. After the distractor task, participants performed the retrieval task to assess their memory.
3) Retrieval: Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by a static frame depicting the face of a speaker from a movie attended during the previous encoding phase. During this visual cueing (5 s), participants were instructed to recall as accurately as possible all auditory information previously associated with that speaker's speech during the movie presentation. At the end of the visual cueing, participants could listen to two auditory speech stimuli: one corresponded to the speaker's auditory speech from the same movie (matching), while the other was taken from another random movie with a same-gender speaker (non-matching). Participants listened to each stimulus sequentially by pressing the index-finger (Speech 1) or middle-finger (Speech 2) button of the response device. The listening order was free, but on every trial each auditory stimulus could be played only once, to avoid speech restudy. At the end of the second auditory stimulus, participants had to determine as quickly and accurately as possible which auditory speech stimulus corresponded to the speaker's face frame, again using the index-finger (Speech 1) or middle-finger (Speech 2) button. The next retrieval trial started after the participant's response.
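Because retrieval is a two-alternative forced choice, memory performance reduces to a proportion-correct score. A minimal sketch, using hypothetical trial records rather than the dataset's actual logfile format, might look like:

```python
# Hypothetical retrieval-trial records: which speech the participant
# chose, and which one actually matched the face cue.
trials = [
    {"chosen": "speech1", "matching": "speech1"},
    {"chosen": "speech2", "matching": "speech1"},
    {"chosen": "speech2", "matching": "speech2"},
    {"chosen": "speech1", "matching": "speech1"},
]

# Proportion of trials where the chosen stimulus was the matching one.
accuracy = sum(t["chosen"] == t["matching"] for t in trials) / len(trials)
print(accuracy)  # 0.75
```

The field names here are illustrative; the actual behavioural logfiles in the repository define their own column layout.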
After the last trial of the retrieval task, participants took a short break before starting a new block (encoding–distractor–retrieval).

Events and corresponding trigger values in the .fif raw MEG data: Each participant underwent a single session; Run1 to Run5 are simply chunks of the continuous MEG recording from that session, split automatically by the acquisition software. Audiovisual movie onset [1]; visual cue onset [2]; Speech 1 onset [4]; Speech 2 onset [8]; probe response key press [16]; movie localiser onset [32]; sound localiser onset [64]. Some participants have associated individual T1w anatomical scans; others do not.
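The trigger codes above are distinct powers of two, so a simple lookup table suffices to label events. The mapping below is taken directly from the dataset description; the `mne.find_events` step in the comment is an assumption about how events would typically be extracted from the .fif recordings:

```python
# Trigger value -> event label, as documented for the .fif recordings.
TRIGGERS = {
    1: "audiovisual movie onset",
    2: "visual cue onset",
    4: "speech 1 onset",
    8: "speech 2 onset",
    16: "probe response key press",
    32: "movie localiser onset",
    64: "sound localiser onset",
}

def label_trigger(value):
    """Return the documented event label for a trigger value."""
    return TRIGGERS.get(value, "unknown")

# With MNE, events would typically come from something like:
#   events = mne.find_events(raw)            # rows of (sample, 0, trigger)
#   labels = [label_trigger(v) for _, _, v in events]
```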
Dataset Information#
Dataset ID: ds006334
Title: Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories
Year: 2025
Authors: Biau E, Wang D, Park H, Jensen O, Hanslmayr S
License: CC0
Citation / DOI: 10.18112/openneuro.ds006334.v1.0.0
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 30
Recordings: 128
Tasks: 1
Channels: 331 (74 recordings), 332 (54 recordings)
Sampling rate (Hz): 1000.0
Duration (hours): Not specified
Pathology: Not specified
Modality: Multisensory
Type: Memory
Size on disk: 166.2 GB
File count: 128
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds006334.v1.0.0
API Reference#
Use the DS006334 class to access this dataset programmatically.
- class eegdash.dataset.DS006334(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset

OpenNeuro dataset ds006334. Modality: meg; Experiment type: Memory; Subject type: Unknown. Subjects: 30; recordings: 128; tasks: 1.

Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
Attributes:
- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/ds006334
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds006334
Examples
>>> from eegdash.dataset import DS006334
>>> dataset = DS006334(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset