DS007523: MEG dataset, 58 subjects#
LPP MEG Listen
Access recordings and metadata through EEGDash.
Citation: Corentin Bel, Julie Bonnaire, Christophe Pallier, Jean-Rémi King (2026). LPP MEG Listen. 10.18112/openneuro.ds007523.v1.0.0
Modality: meg · Subjects: 58 · Recordings: 579 · License: CC0 · Source: openneuro
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS007523
dataset = DS007523(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS007523(cache_dir="./data", subject="01")
Advanced query
dataset = DS007523(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
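The `query` argument accepts MongoDB-style filter documents. To illustrate how such a filter selects recordings, here is a minimal, hypothetical matcher (a sketch for intuition only, not EEGDash's actual implementation) applied to toy metadata records:

```python
def match(record: dict, query: dict) -> bool:
    """Minimal matcher for a subset of MongoDB-style operators ($in and equality)."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator document, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Toy metadata records standing in for recording-level metadata
records = [
    {"subject": "01", "run": "01"},
    {"subject": "02", "run": "03"},
    {"subject": "05", "run": "01"},
]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if match(r, query)]
print([r["subject"] for r in selected])  # → ['01', '02']
```

Conditions on different fields are ANDed together, which mirrors how the dataset filter is combined with your query.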
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info["sfreq"])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds007523,
  title = {LPP MEG Listen},
  author = {Corentin Bel and Julie Bonnaire and Christophe Pallier and Jean-Rémi King},
  year = {2026},
  doi = {10.18112/openneuro.ds007523.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds007523.v1.0.0},
}
About This Dataset#
Summary
This dataset contains magnetoencephalography (MEG) recordings collected while participants listened to the French audiobook of Le Petit Prince by Antoine de Saint-Exupéry. A complementary MEG dataset from the same project, using a reading (RSVP) paradigm, is available on OpenNeuro (accession number: ds007524). This data is analyzed in: d’Ascoli, S., Bel, C., Rapin, J. et al. Towards decoding individual words from non-invasive brain recordings. Nature Communications 16, 10521 (2025). https://doi.org/10.1038/s41467-025-65499-0
Participants
Fifty-eight healthy adults participated in the listening experiment (17 females; mean age = 27.8 years, SD = 5.5 years). All participants were native French speakers, right-handed, and reported no history of neurological disorders. Written informed consent was obtained prior to participation. The study was approved by the relevant
local ethics committee.
Stimuli
The auditory stimulus consisted of the French audiobook version of *Le Petit Prince*.
- Language: French
- Format: continuous audiobook
- Segmentation: 9 parts
- Mean duration per part: 10 min 50 s
- Standard deviation: 55 s
- Minimum duration: 9 min 40 s
- Maximum duration: 12 min 30 s
The same audiobook version was previously used in a publicly available
fMRI dataset (Li et al., 2022).
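As a quick sanity check on the figures above, the total listening time per subject works out to roughly 1 h 37 min:

```python
# 9 audiobook parts at a mean of 10 min 50 s each
parts = 9
mean_part_s = 10 * 60 + 50          # 650 s per part
total_min = parts * mean_part_s / 60
print(round(total_min, 1))          # → 97.5
```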
Experimental Procedure
Participants were seated in the MEG system after informed consent and familiarization with the recording environment. Auditory stimuli were delivered through MEG-compatible earphones. Sound intensity was individually adjusted to a comfortable listening level before the experiment. Participants were instructed to listen attentively and remain as still as possible. The experiment consisted of 9 runs, corresponding to the 9 audiobook segments. Between runs, participants completed 4 multiple-choice comprehension questions presented visually on a screen (not reported here). Short breaks were provided between runs. Alertness and movement were monitored
via camera during recording.
Acquisition
MEG
MEG data for all three tasks were recorded inside the same magnetically shielded room using a whole-head Elekta Neuromag TRIUX MEG system (Elekta Oy, Helsinki, Finland), equipped with 102 magnetometers and 204 planar gradiometers. Data were recorded continuously with a sampling rate of 1000 Hz and an online low-pass filter at 330 Hz and high-pass filter at 0.1 Hz. Vertical and horizontal electrooculograms (EOG) and an electrocardiogram (ECG) were recorded simultaneously using bipolar electrodes to monitor eye movements and heartbeats.
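Relating these acquisition parameters to the channel counts reported under Technical Details: the 102 magnetometers and 204 planar gradiometers account for 306 MEG channels (the larger per-recording totals presumably also include the EOG, ECG, and other auxiliary channels), and the 330 Hz online low-pass sits safely below the Nyquist frequency of the 1000 Hz sampling rate. A small sanity-check sketch:

```python
# Sensor counts stated in the acquisition description
magnetometers, gradiometers = 102, 204
meg_channels = magnetometers + gradiometers

# Online low-pass vs. Nyquist frequency at the stated sampling rate
sfreq, lowpass = 1000.0, 330.0
nyquist = sfreq / 2

print(meg_channels, lowpass < nyquist)  # → 306 True
```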
Anatomical MRI
For each participant, a high-resolution T1-weighted anatomical MRI scan was acquired using a 3T Siemens Magnetom Prisma MRI scanner (Siemens Healthcare, Erlangen, Germany). A standard MPRAGE sequence was used. MRI scans were typically acquired right after the MEG recording. Scans were used for coregistration and cortical surface reconstruction for source analysis.
Data Organization
Raw Data
The root directory includes:
- dataset_description.json
- participants.tsv and participants.json
- task-listen_events.json
- sub-01 to sub-58
- sourcedata/
Each subject directory (sub-XX) contains one session (ses-01) with:
- anat/: T1-weighted MRI (sub-XX_ses-01_T1w.nii.gz) and corresponding JSON sidecar
- meg/: 9 MEG runs (task-listen_run-01 to run-09), each including:
  - continuous MEG data (*_meg.fif)
  - sidecar JSON files
  - events.tsv and channels.tsv files
  - coordinate system file (*_coordsystem.json)
  - calibration and crosstalk files
- sub-XX_ses-01_scans.tsv: scan-level metadata
Each run corresponds to one audiobook segment. Acquisition parameters are provided in the corresponding sidecar JSON
files.
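Based on the file names listed above, the path to a given raw MEG run can be assembled from its BIDS entities. The helper below is a hypothetical sketch (the cache root and the `meg_run_path` name are illustrative assumptions, not part of the EEGDash API):

```python
from pathlib import Path

def meg_run_path(root: str, sub: str, run: str,
                 ses: str = "01", task: str = "listen") -> Path:
    """Expected location of one raw MEG run under the layout described above."""
    name = f"sub-{sub}_ses-{ses}_task-{task}_run-{run}_meg.fif"
    return Path(root) / f"sub-{sub}" / f"ses-{ses}" / "meg" / name

print(meg_run_path("data/ds007523", "01", "03").as_posix())
# → data/ds007523/sub-01/ses-01/meg/sub-01_ses-01_task-listen_run-03_meg.fif
```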
References
Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110
Li, J., et al. (2022). Le Petit Prince multilingual naturalistic fMRI corpus. Scientific Data, 9(1), 530. https://doi.org/10.1038/s41597-022-01625-7
Dataset Information#
| Field | Value |
|---|---|
| Dataset ID | ds007523 |
| Title | LPP MEG Listen |
| Author (year) | Bel2026 |
| Canonical | Dascoli2025 |
| Importable as | DS007523, Bel2026, Dascoli2025 |
| Year | 2026 |
| Authors | Corentin Bel, Julie Bonnaire, Christophe Pallier, Jean-Rémi King |
| License | CC0 |
| Citation / DOI | 10.18112/openneuro.ds007523.v1.0.0 |
| Source links | OpenNeuro · NeMAR · Source URL |
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 58
Recordings: 579
Tasks: 1
Channels: 346 (484), 404 (9), 400 (9), 329 (9), 343 (9), 321
Sampling rate (Hz): 1000.0
Duration (hours): 94.81
Pathology: Healthy
Modality: Auditory
Type: Perception
Size on disk: 444.8 GB
File count: 579
Format: BIDS
License: CC0
DOI: 10.18112/openneuro.ds007523.v1.0.0
API Reference#
Use the DS007523 class to access this dataset programmatically.
class eegdash.dataset.DS007523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset
LPP MEG Listen
- Study: ds007523 (OpenNeuro)
- Author (year): Bel2026
- Canonical: Dascoli2025
Also importable as: DS007523, Bel2026, Dascoli2025. Modality: meg; Experiment type: Perception; Subject type: Healthy. Subjects: 58; recordings: 579; tasks: 1.
Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
Attributes:
- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds007523
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds007523
DOI: https://doi.org/10.18112/openneuro.ds007523.v1.0.0
Examples
>>> from eegdash.dataset import DS007523
>>> dataset = DS007523(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset, eegdash.dataset