DS003633#
ForrestGump-MEG
Access recordings and metadata through EEGDash.
Citation: Xingyu Liu, Yuxuan Dai, Hailun Xie, Zonglei Zhen (2021). ForrestGump-MEG. 10.18112/openneuro.ds003633.v1.0.4
Modality: meg · Subjects: 11 · Recordings: 2298 · License: CC0 · Source: openneuro · Citations: 1.0
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS003633
dataset = DS003633(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS003633(cache_dir="./data", subject="01")
Advanced query
dataset = DS003633(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
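The query argument accepts MongoDB-style operators such as `$in`. As a rough illustration of the matching semantics (a pure-Python sketch, not eegdash's actual implementation):

```python
# Minimal illustration of how a MongoDB-style {"$in": [...]} filter selects
# records. This is NOT the eegdash implementation, just a sketch of the idea.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # → ['01', '02']
```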
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info["sfreq"])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds003633,
title = {ForrestGump-MEG},
author = {Xingyu Liu and Yuxuan Dai and Hailun Xie and Zonglei Zhen},
doi = {10.18112/openneuro.ds003633.v1.0.4},
url = {https://doi.org/10.18112/openneuro.ds003633.v1.0.4},
}
About This Dataset#
ForrestGump-MEG: An audio-visual movie watching MEG dataset
For details, please refer to our paper at https://www.biorxiv.org/content/10.1101/2021.06.04.446837v1.
This dataset contains MEG data recorded from 11 subjects while they watched the 2-hour Chinese-dubbed audio-visual movie ‘Forrest Gump’. The data were acquired with a 275-channel CTF MEG system. Auxiliary data (T1w) as well as derivative data such as preprocessed data and the MEG-MRI co-registration are also included.
Please note that for sub-01, the MEG system failed at the 500th second of run 7 (segment 7); a supplementary run (run 8, segment 7, 200 s) was recorded after the failure. Sub-01 therefore has 9 runs in total, with runs 7 and 8 covering segment 7 and run 9 covering segment 8.
Pre-process procedure description
The T1w images, stored as NIfTI files, were minimally preprocessed using the anatomical preprocessing pipeline from fMRIPrep with default settings.
MEG data were pre-processed using MNE following a three-step procedure: (1) bad channels were detected and removed; (2) a 1 Hz high-pass filter was applied to remove slow drifts from the continuous MEG data; (3) artifact removal was performed with ICA.
Stimulus material
The audio-visual stimulus materials were from the Chinese-dubbed ‘Forrest Gump’ DVD released in 2013 (ISBN: 978-7-7991-3934-0), which cannot be publicly released due to copyright restrictions. The stimulus materials are available upon reasonable request and on condition of a research-only data use agreement (correspondence with Xingyu Liu, liuxingyu987@foxmail.com).
Dataset content overview
The data were organized following MEG-BIDS using the MNE-BIDS toolbox.
The pre-processed MEG data
The preprocessed MEG recordings, including the preprocessed MEG data, the event files, the ICA decomposition and label files, and the MEG-MRI coordinate transformation file, are hosted here:
|---./derivatives/preproc_meg-mne_mri-fmriprep/sub-xx/ses-movie/meg/
|---sub-xx_ses-movie_coordsystem.json
|---sub-xx_ses-movie_task-movie_run-xx_channels.tsv
|---sub-xx_ses-movie_task-movie_run-xx_decomposition.tsv
|---sub-xx_ses-movie_task-movie_run-xx_events.tsv
|---sub-xx_ses-movie_task-movie_run-xx_ica.fif.gz
|---sub-xx_ses-movie_task-movie_run-xx_meg.fif
|---sub-xx_ses-movie_task-movie_run-xx_meg.json
|---...
|---sub-xx_ses-movie_task-movie_trans.fif
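For scripting, these derivative paths can be assembled from the BIDS entities. A small sketch (the helper name and the subject/run labels are illustrative):

```python
# Build the path to a preprocessed MEG run following the layout shown above.
# `preproc_meg_path` is a hypothetical helper, not part of eegdash or MNE-BIDS.
from pathlib import Path

def preproc_meg_path(root: Path, subject: str, run: str) -> Path:
    stem = f"sub-{subject}_ses-movie_task-movie_run-{run}"
    return (root / "derivatives" / "preproc_meg-mne_mri-fmriprep"
            / f"sub-{subject}" / "ses-movie" / "meg" / f"{stem}_meg.fif")

p = preproc_meg_path(Path("."), "01", "01")
print(p.as_posix())
# → derivatives/preproc_meg-mne_mri-fmriprep/sub-01/ses-movie/meg/sub-01_ses-movie_task-movie_run-01_meg.fif
```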
The pre-processed MRI data
The preprocessed MRI volume, the reconstructed surfaces, and associated files including the transformation files are hosted here:
|---./derivatives/preproc_meg-mne_mri-fmriprep/sub-xx/ses-movie/anat/
|---sub-xx_ses-movie_desc-preproc_T1w.nii.gz
|---sub-xx_ses-movie_hemi-L_inflated.surf.gii
|---sub-xx_ses-movie_hemi-L_midthickness.surf.gii
|---sub-xx_ses-movie_hemi-L_pial.surf.gii
|---sub-xx_ses-movie_hemi-L_smoothwm.surf.gii
|---sub-xx_ses-movie_hemi-R_inflated.surf.gii
|---sub-xx_ses-movie_hemi-R_midthickness.surf.gii
|---sub-xx_ses-movie_hemi-R_pial.surf.gii
|---sub-xx_ses-movie_hemi-R_smoothwm.surf.gii
|---sub-xx_ses-movie_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz
|---sub-xx_ses-movie_space-MNI152NLin6Asym_desc-preproc_T1w.nii.gz
|---...
The FreeSurfer surface data, the high-resolution head surface, and the MRI fiducials are provided here:
|---./derivatives/preproc_meg-mne_mri-fmriprep/sourcedata/
|---freesurfer
|---sub-xx
|---...
The raw data
|---./sub-xx/ses-movie/
|---meg/
| |---sub-xx_ses-movie_coordsystem.json
| |---sub-xx_ses-movie_task-movie_run-xx_channels.tsv
| |---sub-xx_ses-movie_task-movie_run-xx_events.tsv
| |---sub-xx_ses-movie_task-movie_run-xx_meg.ds
| |---sub-xx_ses-movie_task-movie_run-xx_meg.json
| |---...
|---anat/
|---sub-xx_ses-movie_T1w.json
|---sub-xx_ses-movie_T1w.nii.gz
Dataset Information#
Dataset ID: ds003633
Title: ForrestGump-MEG
Year: 2021
Authors: Xingyu Liu, Yuxuan Dai, Hailun Xie, Zonglei Zhen
License: CC0
Citation / DOI: 10.18112/openneuro.ds003633.v1.0.4
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 11
Recordings: 2298
Tasks: 2
Channels: 272 (96 recordings), 409 (89 recordings), 378 (7 recordings)
Sampling rate (Hz): 600.0 (178 recordings), 1200.0 (14 recordings)
Duration (hours): 0.0
Pathology: Healthy
Modality: Multisensory
Type: Perception
Size on disk: 73.5 GB
File count: 2298
Format: BIDS
License: CC0
DOI: 10.18112/openneuro.ds003633.v1.0.4
API Reference#
Use the DS003633 class to access this dataset programmatically.
- class eegdash.dataset.DS003633(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset
OpenNeuro dataset ds003633. Modality: meg; Experiment type: Perception; Subject type: Healthy. Subjects: 12; recordings: 96; tasks: 2.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type: Path
- query#
Merged query with the dataset filter applied.
- Type: dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds003633
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds003633
Examples
>>> from eegdash.dataset import DS003633
>>> dataset = DS003633(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset