DS005345#
Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset
Access recordings and metadata through EEGDash.
Citation: Zhengwu Ma, Nan Wang, Jixing Li (2024). Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset. 10.18112/openneuro.ds005345.v1.0.1
Modality: eeg · Subjects: 26 · Recordings: 421 · License: CC0 · Source: openneuro
Metadata: Complete (100%)
Quickstart#
Install

```bash
pip install eegdash
```

Access the data

```python
from eegdash.dataset import DS005345

dataset = DS005345(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

Filter by subject

```python
dataset = DS005345(cache_dir="./data", subject="01")
```

Advanced query

```python
dataset = DS005345(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```
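To make the MongoDB-style filter above concrete, here is a minimal sketch of how a `$in` condition selects records. The `matches` helper and the sample records are hypothetical illustrations, not part of the eegdash API:

```python
# Hypothetical matcher illustrating how a filter such as
# {"subject": {"$in": ["01", "02"]}} selects records.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Support the $in operator used in the example above.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # ['01', '02']
```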
Iterate recordings

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```
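The same iteration pattern can be used to summarize recordings per subject. Since loading the real dataset requires a download, this sketch uses a hypothetical stand-in class with the `.subject` attribute shown above:

```python
from collections import Counter

# Hypothetical stand-in for dataset items; real recordings expose
# `.subject` as shown in the loop above.
class FakeRecording:
    def __init__(self, subject):
        self.subject = subject

recordings = [FakeRecording(s) for s in ["01", "01", "02", "03", "03", "03"]]

# Count how many recordings each subject contributed.
per_subject = Counter(rec.subject for rec in recordings)
print(dict(per_subject))  # {'01': 2, '02': 1, '03': 3}
```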
If you use this dataset in your research, please cite the original authors.
BibTeX

```bibtex
@dataset{ds005345,
  title  = {Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset},
  author = {Zhengwu Ma and Nan Wang and Jixing Li},
  doi    = {10.18112/openneuro.ds005345.v1.0.1},
  url    = {https://doi.org/10.18112/openneuro.ds005345.v1.0.1},
}
```
About This Dataset#
Participants
This dataset includes 25 native Mandarin Chinese speakers (14 females, mean age = 24.04 ± 2.28 years) who participated in both EEG and fMRI experiments. The participants were all right-handed, with no reported history of neurological disorders. They were enrolled in undergraduate or graduate programs in Shanghai. All participants gave informed consent, and the experiments were approved by the Ethics Committee of the Ninth People’s Hospital, affiliated with Shanghai Jiao Tong University School of Medicine (SH9H-2019-T33-2 and SH9H-2022-T379-2).
In the case of French participants, due to legal constraints, additional session considerations were taken into account, such as shorter session durations.
Experiment Procedure
MRI Scanning Sessions
Participants underwent both EEG and fMRI experiments while listening to the Chinese version of *Le Petit Prince*. During the MRI session, participants were instructed to maintain fixation on a crosshair on the screen and to minimize eye movements and head motion. The task involved attending to different talkers in the multi-talker condition (single male, single female, mixed male, and mixed female talkers).
Session Breakdown
The entire session lasted approximately 70 minutes for fMRI participants and comprised four conditions (single-talker, mixed-attended, and mixed-unattended).
Quiz questions were administered after each run to assess participants’ comprehension of the narrative.
In the French cohort, due to legal time constraints, the experiment durations were adjusted.
Stimuli
The stimuli were selected excerpts from the Chinese version of *Le Petit Prince* (available at xiaowangzi.org). These audio clips were previously used in both EEG (Li et al., 2024) and fMRI (Li et al., 2022) studies.
The English and Chinese versions were enhanced with visual stimuli (e.g., images of scenes from the book) to align with the storyline. However, visual stimuli were not presented in the French version to comply with legal restrictions.
Acquisition
MRI Hardware & Scanning Parameters
EEG: Data were collected using a 64-channel actiCAP system, sampled at 500 Hz, and filtered between 0.016 and 80 Hz.
fMRI: Scanning was performed on a 7.0 T Terra Siemens MRI scanner at the Zhangjiang International Brain Imaging Centre. The scanning parameters differed slightly between the English/Chinese and French studies due to equipment availability.
Functional MRI: 85 interleaved axial slices (1.6×1.6×1.6 mm voxel size, TR = 1000 ms, TE = 22.2 ms)
Anatomical MRI: MP-RAGE sequence, T1-weighted images (voxel size = 0.7×0.7×0.7 mm).
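The EEG acquisition parameters above imply a rough storage footprint per recording. As a back-of-the-envelope illustration (assuming float32 samples; actual BIDS files are formatted and compressed differently):

```python
# Uncompressed size of one hour of raw EEG at the stated
# acquisition parameters: 64 channels sampled at 500 Hz.
n_channels = 64
sfreq = 500           # Hz
seconds = 3600        # one hour
bytes_per_sample = 4  # float32 (assumption for illustration)

total_bytes = n_channels * sfreq * seconds * bytes_per_sample
print(f"{total_bytes / 1e6:.1f} MB")  # 460.8 MB
```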
Preprocessing
MRI Data Processing
DICOM to NIfTI Conversion: All raw MRI data were converted to NIfTI format using dcm2niix (version 1.0.20220505) and processed with the fMRIPrep pipeline (version 20.2.0).

Anatomical Preprocessing:
- Skull stripping
- Segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)
- Registration to Montreal Neurological Institute (MNI) space using the MNI152NLin2009cAsym:res-2 template

Functional Preprocessing:
- Motion correction
- Slice-timing correction
- Multi-echo ICA for denoising
- Voxel resampling to native and MNI spaces
Note: Visual stimuli processing for the English and Chinese conditions was handled separately to avoid potential biases in the analysis.
Dataset Information#
| Field | Value |
| --- | --- |
| Dataset ID | ds005345 |
| Title | Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset |
| Year | 2024 |
| Authors | Zhengwu Ma, Nan Wang, Jixing Li |
| License | CC0 |
| Citation / DOI | 10.18112/openneuro.ds005345.v1.0.1 |
| Source links | OpenNeuro, NeMAR, Source URL |
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 26
Recordings: 421
Tasks: 2
Channels: 64
Sampling rate (Hz): 500.0
Duration (hours): 0.0
Pathology: Healthy
Modality: Auditory
Type: Attention
Size on disk: 162.5 GB
File count: 421
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds005345.v1.0.1
API Reference#
Use the DS005345 class to access this dataset programmatically.
- class eegdash.dataset.DS005345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

  Bases: EEGDashDataset

  OpenNeuro dataset ds005345. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 26; recordings: 26; tasks: 1.

  Parameters:
  - cache_dir (str | Path) – Directory where data are cached locally.
  - query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
  - s3_bucket (str | None) – Base S3 bucket used to locate the data.
  - **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
  Attributes:
  - data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
  - query (dict) – Merged query with the dataset filter applied.
  - records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
OpenNeuro dataset: https://openneuro.org/datasets/ds005345

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005345
Examples
>>> from eegdash.dataset import DS005345
>>> dataset = DS005345(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset

eegdash.dataset