DS003392#
NeuroSpin hMT+ Localizer DATA (MEG & aMRI)
Access recordings and metadata through EEGDash.
Citation: Nicolas Zilber, Philippe Ciuciu, Alexandre Gramfort, Leila Azizi, Virginie van Wassenhove (2020). NeuroSpin hMT+ Localizer DATA (MEG & aMRI). 10.18112/openneuro.ds003392.v1.0.4
Modality: meg · Subjects: 11 · Recordings: 159 · License: CC0 · Source: openneuro · Citations: 0
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS003392
dataset = DS003392(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS003392(cache_dir="./data", subject="01")
Advanced query
dataset = DS003392(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds003392,
  title = {NeuroSpin hMT+ Localizer DATA (MEG & aMRI)},
  author = {Nicolas Zilber and Philippe Ciuciu and Alexandre Gramfort and Leila Azizi and Virginie van Wassenhove},
  doi = {10.18112/openneuro.ds003392.v1.0.4},
  url = {https://doi.org/10.18112/openneuro.ds003392.v1.0.4},
}
About This Dataset#
Dataset description: Magnetoencephalography (MEG) dataset recorded during an hMT+ (human visual motion area) localizer task.
Published in: Zilber, N., Ciuciu, P., Gramfort, A., Azizi, L., & Van Wassenhove, V. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. Neuroimage, 93, 32-46.
Data curation: Sophie Herbst, Alexandre Gramfort
This MEG dataset was prepared in the Brain Imaging Data Structure (MEG-BIDS, Niso et al. 2018) format using MNE-BIDS (Appelhoff et al. 2019).
The dataset contains 10 of the 12 participants from the vision-only training group.
Two participants were removed: one due to problems with the trigger channel, and one due to different acquisition settings that prevented processing the data without prior adjustment.
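Because the data are organized in MEG-BIDS, a downloaded copy can also be read directly with MNE-BIDS. The sketch below is illustrative only: the cache layout (cache_dir/ds003392), the subject label, and the task name are assumptions, not values documented on this page.

import mne_bids

# Root of the downloaded dataset; EEGDash caches data under cache_dir,
# so this path is an assumption about the local layout.
bids_root = "./data/ds003392"

# Subject and task entities are placeholders; list the real ones with
# mne_bids.get_entity_vals(bids_root, "task") if unsure.
bids_path = mne_bids.BIDSPath(subject="01", task="localizer",
                              root=bids_root, datatype="meg")
raw = mne_bids.read_raw_bids(bids_path)
print(raw.info)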
EXPERIMENT
Participants were presented with a cloud of moving dots, always starting with incoherent motion (during this phase, "up" and "down" trials look identical because of the incoherence). After 500 ms, the motion became coherent in 50% of the trials (95% coherence, up or down) and remained incoherent in the other 50%, lasting for 1000 ms. Participants were instructed to passively view the stimuli; there were 120 trials in total.
Events:
1: coherent / down
2: coherent / up
3: incoherent / down
4: incoherent / up
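As a rough illustration, these event codes can be used to epoch a recording with MNE-Python. The trigger-channel name ("STI101") and the epoch window below are assumptions for a Neuromag system, not values documented in this README.

import mne
from eegdash.dataset import DS003392

dataset = DS003392(cache_dir="./data")
raw = dataset.datasets[0].raw

# Event codes as listed above.
event_id = {
    "coherent/down": 1,
    "coherent/up": 2,
    "incoherent/down": 3,
    "incoherent/up": 4,
}

# "STI101" is an assumed trigger-channel name for Elekta/Neuromag
# acquisitions; adjust it if find_events returns nothing.
events = mne.find_events(raw, stim_channel="STI101")

# Illustrative window covering the 500 ms incoherent lead-in and the
# 1000 ms coherent/incoherent phase (exact trigger timing not documented here).
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.5, baseline=(None, 0), preload=True)
print(epochs)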
MEG
Brain magnetic fields were recorded in a magnetically shielded room (MSR) using a 306-channel MEG system (Neuromag Elekta LTD, Helsinki). MEG recordings were sampled at 2 kHz and band-pass filtered between 0.03 and 600 Hz.
Four head position indicator (HPI) coils measured the participants' head position before each block; three fiducial markers (nasion and pre-auricular points) were used for digitization and for co-registration with the anatomical MRI (aMRI) acquired immediately following the MEG acquisition.
Electrooculograms (EOG, horizontal and vertical eye movements) and an electrocardiogram (ECG) were recorded simultaneously. Prior to the session, 5 min of empty-room recording was acquired for the computation of the noise covariance matrix.
Bad MEG channels were marked manually.
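The empty-room recording can be used to estimate a noise covariance matrix with MNE-Python, e.g. for source modelling. A minimal sketch, assuming you have located the empty-room FIF file in the downloaded BIDS tree (the path below is a placeholder):

import mne

# Placeholder path; the actual empty-room file lives in the BIDS cache
# (typically under a sub-emptyroom entry).
empty_room_fname = "path/to/empty_room_meg.fif"
raw_empty = mne.io.read_raw_fif(empty_room_fname, preload=True)

# Optionally match the filtering applied to the task data first
# (settings here are illustrative, not prescribed by the dataset).
raw_empty.filter(l_freq=None, h_freq=40.0)

# Noise covariance estimated over the full empty-room recording.
noise_cov = mne.compute_raw_covariance(raw_empty, method="shrunk")
print(noise_cov)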
MRI
The T1-weighted aMRI was recorded using a 3-T Siemens Trio MRI scanner. Sequence parameters: voxel size 1.0 × 1.0 × 1.1 mm; acquisition time 466 s; repetition time TR = 2300 ms; echo time TE = 2.98 ms.
References
Zilber, N., Ciuciu, P., Gramfort, A., Azizi, L., & Van Wassenhove, V. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. Neuroimage, 93, 32-46.
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896
Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. http://doi.org/10.1038/sdata.2018.110
Dataset Information#
Dataset ID | ds003392
Title | NeuroSpin hMT+ Localizer DATA (MEG & aMRI)
Year | 2020
Authors | Nicolas Zilber, Philippe Ciuciu, Alexandre Gramfort, Leila Azizi, Virginie van Wassenhove
License | CC0
Citation / DOI | 10.18112/openneuro.ds003392.v1.0.4
Source links | OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 11
Recordings: 159
Tasks: 2
Channels: 306 (22), 320 (22)
Sampling rate (Hz): 2000.0
Duration (hours): 0.0
Pathology: Healthy
Modality: Visual
Type: Perception
Size on disk: 10.1 GB
File count: 159
Format: BIDS
License: CC0
DOI: 10.18112/openneuro.ds003392.v1.0.4
API Reference#
Use the DS003392 class to access this dataset programmatically.
- class eegdash.dataset.DS003392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
  Bases: EEGDashDataset
  OpenNeuro dataset ds003392. Modality: meg; Experiment type: Perception; Subject type: Healthy. Subjects: 12; recordings: 33; tasks: 2.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
  Local dataset cache directory (cache_dir / dataset_id).
  Type: Path
- query#
  Merged query with the dataset filter applied.
  Type: dict
- records#
  Metadata records used to build the dataset, if pre-fetched.
  Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds003392
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds003392
Examples
>>> from eegdash.dataset import DS003392
>>> dataset = DS003392(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset