DS004078#

A synchronized multimodal neuroimaging dataset to study brain language processing

Access recordings and metadata through EEGDash.

Citation: Shaonan Wang, Xiaohan Zhang, Jiajun Zhang, Chengqing Zong (2022). A synchronized multimodal neuroimaging dataset to study brain language processing. 10.18112/openneuro.ds004078.v1.2.1

Modality: MEG Subjects: 12 Recordings: 5406 License: CC0 Source: openneuro Citations: 4

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS004078

dataset = DS004078(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS004078(cache_dir="./data", subject="01")

Advanced query

dataset = DS004078(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds004078,
  title = {A synchronized multimodal neuroimaging dataset to study brain language processing},
  author = {Shaonan Wang and Xiaohan Zhang and Jiajun Zhang and Chengqing Zong},
  doi = {10.18112/openneuro.ds004078.v1.2.1},
  url = {https://doi.org/10.18112/openneuro.ds004078.v1.2.1},
}

About This Dataset#

Overview

This synchronized multimodal neuroimaging dataset for studying brain language processing (SMN4Lang) contains:

  1. fMRI and MEG data collected from the same 12 participants while they were listening to 6 hours of naturalistic stories;

  2. high-resolution structural (T1, T2), diffusion MRI and resting-state fMRI data for each participant;

  3. rich linguistic annotations for the stimuli, including word frequencies, part-of-speech tags, syntactic tree structures, time-aligned characters and words, and various types of word and character embeddings.


More details about the dataset are given below.

Participants

All 12 participants (4 female, 8 male; aged 23-30 years) were recruited from universities in Beijing. Each completed both fMRI and MEG visits, with the fMRI experiments finished first and a gap of at least one month before the MEG experiments. All participants were right-handed adults and native speakers of Mandarin Chinese who reported normal hearing and no history of neurological disorders. They were paid and gave written informed consent. The study was conducted under the approval of the Institutional Review Board of Peking University.

Experimental Procedures

Before each scanning session, participants completed a short information survey and an informed consent form. During both fMRI and MEG scanning, participants were instructed to listen attentively to the story stimulus, remain still, and answer questions on the screen after each audio finished. Stimulus presentation was implemented using Psychtoolbox-3. At the beginning of each run, the instruction “Waiting for the scanning” appeared on the screen, followed by an 8-second blank. The instruction then changed to “This audio is about to start, please listen carefully” for 2.65 seconds before the audio played; during playback, a centrally located fixation cross was presented; finally, two questions about the story were presented, each with four answer choices, at a pace controlled by the participants. Auditory story stimuli were delivered via S14 insert earphones for fMRI sessions (with headphones or foam padding placed over the earphones to reduce scanner noise) and Elekta matching insert earphones for MEG sessions.

The fMRI recording was split into 7 visits of 1.5 hours each: T1, T2, and resting-state MRI were collected on the first visit; fMRI with listening tasks was collected on visits 1 through 6; and diffusion MRI was collected on the last visit. During MRI scanning (T1, T2, diffusion, and resting-state), participants were instructed to lie relaxed and still in the scanner. The MEG recording was split into 6 visits of 1.5 hours each.

Stimuli

Stimuli are 60 story audios, each 4 to 7 minutes long, covering various topics such as education and culture. All audios were downloaded from the Renmin Daily Review website and were read by the same male broadcaster. The corresponding texts were also downloaded from the Renmin Daily Review website; errors in the texts were manually corrected to ensure that audio and texts are aligned.

Annotations

Rich annotations of audios and texts are provided in the derivatives/annotations folder, including:

  1. Speech-to-text alignment: the onset and offset times of each character and word in the audio are provided in the “stimuli/time_align” folder. Note that the onset and offset times were shifted by 10.65 seconds to align with the fMRI images, because the fMRI scan started 10.65 seconds before the audio began playing.

  2. Frequency: Character and word frequencies in the “stimuli/frequency” folder were calculated from the Xinhua news corpus and then log-transformed.

  3. Textual embeddings: text embeddings computed by different pre-trained language models (including Word2Vec, BERT, and GPT2) are provided in the “stimuli/embeddings” folder. The character-level and word-level embeddings computed by the Word2Vec and BERT models, and the word-level embeddings computed by the GPT2 model, are provided.

  4. Syntactic annotations: the POS tag of each word, the constituent tree structure, and the dependency tree structure are provided in the “stimuli/syntactic_annotations” folder. The POS tags were annotated by experts following the criteria of the Peking Chinese Treebank. The constituent tree structures were manually annotated by linguistics students following the PKU Chinese Treebank criteria using the TreeEditor tool, and all results were double-checked by different experts. The dependency tree structures were derived from the constituent trees using Stanford CoreNLP tools.
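Because the onsets in “stimuli/time_align” were shifted by 10.65 seconds onto the fMRI clock (item 1 above), converting between the two time axes is a one-line subtraction or addition. A minimal sketch, with illustrative variable names rather than the dataset's actual file format:

```python
FMRI_OFFSET_S = 10.65  # the fMRI scan started 10.65 s before audio playback

def to_audio_time(fmri_aligned_onset: float) -> float:
    """Convert an fMRI-aligned onset back to time within the audio file."""
    return fmri_aligned_onset - FMRI_OFFSET_S

def to_fmri_time(audio_onset: float) -> float:
    """Convert an audio-relative onset onto the fMRI time axis."""
    return audio_onset + FMRI_OFFSET_S

# A word starting 2.0 s into the audio appears at 12.65 s on the fMRI clock
print(to_fmri_time(2.0))  # 12.65
```

The same offset applies to offsets (end times) as well as onsets, since the shift is a constant added to every timestamp.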

Preprocessing

The MRI data, including the structural, functional, resting-state, and diffusion images, were preprocessed using the HCP “minimal preprocessing pipelines”.

The MEG data were first preprocessed using the temporal Signal Space Separation (tSSS) method, and bad channels were excluded. Independent component analysis (ICA) was then applied with the MNE software to remove ocular artefacts.

Usage Notes

For the MEG data of sub-08_run-16 and sub-09_run-7, the stimulus-start triggers were not recorded due to technical problems. The first trigger in these two runs is therefore the stimulus-end trigger, and the start time can be computed by subtracting the stimulus duration from the time of the first trigger.
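The workaround in the note above amounts to one subtraction. A minimal sketch with hypothetical numbers; the real trigger time and audio duration must be read from the MEG recording and the stimulus metadata:

```python
def stimulus_start(first_trigger_s: float, stimulus_duration_s: float) -> float:
    """For runs missing the start trigger (sub-08 run-16, sub-09 run-7),
    the first recorded trigger marks the stimulus END, so the start time
    is the end-trigger time minus the audio duration."""
    return first_trigger_s - stimulus_duration_s

# Hypothetical values: end trigger at 310.0 s, a 300-second story audio
print(stimulus_start(310.0, 300.0))  # 10.0
```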

Dataset Information#

Dataset ID

DS004078

Title

A synchronized multimodal neuroimaging dataset to study brain language processing

Year

2022

Authors

Shaonan Wang, Xiaohan Zhang, Jiajun Zhang, Chengqing Zong

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds004078.v1.2.1

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds004078,
  title = {A synchronized multimodal neuroimaging dataset to study brain language processing},
  author = {Shaonan Wang and Xiaohan Zhang and Jiajun Zhang and Chengqing Zong},
  doi = {10.18112/openneuro.ds004078.v1.2.1},
  url = {https://doi.org/10.18112/openneuro.ds004078.v1.2.1},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 12

  • Recordings: 5406

  • Tasks: 2

Channels & sampling rate
  • Channels: 328 (720), 306 (720)

  • Sampling rate (Hz): 1000.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Auditory

  • Type: Other

Files & format
  • Size on disk: 631.1 GB

  • File count: 5406

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds004078.v1.2.1

API Reference#

Use the DS004078 class to access this dataset programmatically.

class eegdash.dataset.DS004078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds004078. Modality: meg; Experiment type: Other; Subject type: Healthy. Subjects: 12; recordings: 720; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
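As a concrete illustration of the Notes above: a user-supplied MongoDB-style filter is ANDed with the class's own dataset selector. The merged shape below is an assumption sketched from the Notes, not the library's internal representation, and any field used must be in ALLOWED_QUERY_FIELDS:

```python
# User-supplied MongoDB-style filter (must not contain the key "dataset")
user_query = {"subject": {"$in": ["01", "02"]}, "task": "listening"}  # task name is hypothetical

# Conceptually, DS004078 combines this with its own dataset selector,
# roughly like:
merged = {"dataset": "ds004078", **user_query}

print(sorted(merged))  # ['dataset', 'subject', 'task']
```

Passing a query that already contains the key `dataset` is rejected, since the class owns that part of the filter.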

References

  • OpenNeuro dataset: https://openneuro.org/datasets/ds004078

  • NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds004078

Examples

>>> from eegdash.dataset import DS004078
>>> dataset = DS004078(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()

__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#