DS000117#

Multisubject, multimodal face processing

Access recordings and metadata through EEGDash.

Citation: Wakeman, DG, Henson, RN (2018). Multisubject, multimodal face processing. 10.18112/openneuro.ds000117.v1.1.0

Modality: MEG Subjects: 16 Recordings: 1156 License: CC0 Source: OpenNeuro Citations: 77

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS000117

dataset = DS000117(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS000117(cache_dir="./data", subject="01")

Advanced query

dataset = DS000117(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds000117,
  title = {Multisubject, multimodal face processing},
  author = {Wakeman, DG and Henson, RN},
  doi = {10.18112/openneuro.ds000117.v1.1.0},
  url = {https://doi.org/10.18112/openneuro.ds000117.v1.1.0},
}

About This Dataset#

This dataset was obtained from the OpenNeuro project (https://www.openneuro.org). Accession #: ds000117

The same dataset is also available here: ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/, but in a non-BIDS format (which may be easier to download by subject rather than by modality)

Note that it is a subset of the data available on OpenfMRI (http://www.openfmri.org; Accession #: ds000117).

Description: Multi-subject, multi-modal (sMRI+fMRI+MEG+EEG) neuroimaging dataset on face processing

Please cite the following reference if you use these data:


Wakeman, D.G. & Henson, R.N. (2015). A multi-subject, multi-modal human neuroimaging dataset. Sci. Data 2:150001 doi: 10.1038/sdata.2015.1

The data have been used in several publications including, for example:

Henson, R.N., Abdulrahman, H., Flandin, G. & Litvak, V. (2019). Multimodal integration of M/EEG and f/MRI data in SPM12. Frontiers in Neuroscience, Methods, 13, 300.

Henson, R.N., Wakeman, D.G., Litvak, V. & Friston, K.J. (2011). A Parametric Empirical Bayesian framework for the EEG/MEG inverse problem: generative models for multisubject and multimodal integration. Frontiers in Human Neuroscience, 5, 76, 1-16.

Chapter 42 of the SPM12 manual (http://www.fil.ion.ucl.ac.uk/spm/doc/manual.pdf)

(See ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications for a full list.) The data were also used for the BioMag2010 data competition and the Kaggle competition: https://www.kaggle.com/c/decoding-the-human-brain

func/

Unlike in v1-v3 of this dataset, the first two (dummy) volumes have now been removed (as stated in *.json), so event onset times correctly refer to t=0 at the start of the third volume.

Note that, owing to a scanner error, Subject 10 has only 170 volumes in the last run (Run 9).

meg/

Three anatomical fiducials were digitized for aligning the MEG with the MRI: the nasion (lowest depression between the eyes) and the left and right ears (lowest depression between the tragus and the helix, above the tragus). This procedure is illustrated here: http://neuroimage.usc.edu/brainstorm/CoordinateSystems#Subject_Coordinate_System_.28SCS_.2F_CTF.29 and in task-facerecognition_fidinfo.pdf

The following triggers are included in the .fif files and are also used in the “trigger” column of the meg and bold events files:

Trigger  Label                              Simplified Label
5        Initial Famous Face                IniFF
6        Immediate Repeat Famous Face       ImmFF
7        Delayed Repeat Famous Face         DelFF
13       Initial Unfamiliar Face            IniUF
14       Immediate Repeat Unfamiliar Face   ImmUF
15       Delayed Repeat Unfamiliar Face     DelUF
17       Initial Scrambled Face             IniSF
18       Immediate Repeat Scrambled Face    ImmSF
19       Delayed Repeat Scrambled Face      DelSF
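The trigger scheme above can be captured as a small lookup table when working with the events. The dictionary values below are transcribed from the table; the `face_type` helper is a hypothetical convenience, not part of the dataset or the EEGDash API:

```python
# Trigger codes used in the .fif files and the "trigger" column of the
# meg/bold events files (transcribed from the table above).
TRIGGERS = {
    5: ("Initial Famous Face", "IniFF"),
    6: ("Immediate Repeat Famous Face", "ImmFF"),
    7: ("Delayed Repeat Famous Face", "DelFF"),
    13: ("Initial Unfamiliar Face", "IniUF"),
    14: ("Immediate Repeat Unfamiliar Face", "ImmUF"),
    15: ("Delayed Repeat Unfamiliar Face", "DelUF"),
    17: ("Initial Scrambled Face", "IniSF"),
    18: ("Immediate Repeat Scrambled Face", "ImmSF"),
    19: ("Delayed Repeat Scrambled Face", "DelSF"),
}

def face_type(trigger: int) -> str:
    """Collapse a trigger code into its face category (hypothetical helper)."""
    if trigger in (5, 6, 7):
        return "famous"
    if trigger in (13, 14, 15):
        return "unfamiliar"
    if trigger in (17, 18, 19):
        return "scrambled"
    raise ValueError(f"unknown trigger code: {trigger}")

print(TRIGGERS[5][1], face_type(5))  # IniFF famous
```

A mapping like this is handy for relabeling event columns or pooling the nine conditions into the three face categories used in many analyses of these data.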

stimuli/meg/

The .bmp files correspond to those described in the text. There are 6 additional images in this directory, which were used in the practice experiment to familiarize participants with the task (hence some more BIDS validator warnings)

stimuli/mri/

The .bmp files correspond to those described in the text.

Defacing

Defacing of MPRAGE T1 images was performed by the submitter. A subset of subjects have given consent for non-defaced versions to be shared - in which case, please contact rik.henson@mrc-cbu.cam.ac.uk.

Quality Control

Mriqc was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

Known Issues

N/A

Relationship of Subject Numbering relative to other versions of Dataset

There are multiple versions of the dataset available on the web (see notes above), and these entailed a renumbering of the subjects for various reasons. Here are all the versions and how to match subjects between them (plus some rationale and history for different versions):

  1. Original Paper (N=19): Wakeman & Henson (2015): doi:10.1038/sdata.2015.1

    Numbers refer to the order in which subjects were tested (some, e.g. 4, 7, 13, were excluded for not completing both MRI and MEG sessions)

  2. openfMRI, renumbered from paper: http://openfmri.org/s3-browser/?prefix=ds000117/ds000117_R0.1.1/uncompressed/

    Numbers 1-19 were simply made contiguous

  3. FTP subset of N=16: ftp: ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/

    This set was used for SPM courses. It was designed to illustrate multimodal integration, so good MRI+MEG+EEG data were wanted for all subjects. The original subject_01 and subject_06 were removed because of bad EEG data, and subject_19 because of poor EEG and fMRI data (subject_14 was also renumbered).

  4. Current OpenNeuro subset N=16 used for (BIDS): https://openneuro.org/datasets/ds000117

    OpenNeuro was a rebranding of openfMRI and enforced the BIDS format. Since this version was designed to illustrate multi-modal BIDS, it kept the same numbering as the FTP set.

W&H2015      openfMRI   FTP     openNeuro
========     ========   ===     =========
subject_01   sub001     -       -
subject_02   sub002     Sub01   sub-01
subject_03   sub003     Sub02   sub-02
subject_05   sub004     Sub03   sub-03
subject_06   sub005     -       -
subject_08   sub006     Sub05   sub-05
subject_09   sub007     Sub06   sub-06
subject_10   sub008     Sub07   sub-07
subject_11   sub009     Sub08   sub-08
subject_12   sub010     Sub09   sub-09
subject_14   sub011     Sub04   sub-04
subject_15   sub012     Sub10   sub-10
subject_16   sub013     Sub11   sub-11
subject_17   sub014     Sub12   sub-12
subject_18   sub015     Sub13   sub-13
subject_19   sub016     -       -
subject_23   sub017     Sub14   sub-14
subject_24   sub018     Sub15   sub-15
subject_25   sub019     Sub16   sub-16

(- = subject not included in that version)
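When comparing results against earlier publications, the correspondence above can be encoded directly. The entries below are transcribed from the table; the dictionary name is illustrative:

```python
# OpenNeuro BIDS ID -> original Wakeman & Henson (2015) subject number,
# transcribed from the correspondence table above.
OPENNEURO_TO_WH2015 = {
    "sub-01": "subject_02",
    "sub-02": "subject_03",
    "sub-03": "subject_05",
    "sub-04": "subject_14",  # the renumbered subject
    "sub-05": "subject_08",
    "sub-06": "subject_09",
    "sub-07": "subject_10",
    "sub-08": "subject_11",
    "sub-09": "subject_12",
    "sub-10": "subject_15",
    "sub-11": "subject_16",
    "sub-12": "subject_17",
    "sub-13": "subject_18",
    "sub-14": "subject_23",
    "sub-15": "subject_24",
    "sub-16": "subject_25",
}

print(OPENNEURO_TO_WH2015["sub-04"])  # subject_14
```

Note that the mapping is not order-preserving (sub-04 maps to subject_14, not subject_06), so a lookup table like this is safer than arithmetic on the subject numbers.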

Dataset Information#

Dataset ID

DS000117

Title

Multisubject, multimodal face processing

Year

2018

Authors

Wakeman, DG, Henson, RN

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds000117.v1.1.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds000117,
  title = {Multisubject, multimodal face processing},
  author = {Wakeman, DG and Henson, RN},
  doi = {10.18112/openneuro.ds000117.v1.1.0},
  url = {https://doi.org/10.18112/openneuro.ds000117.v1.1.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 16

  • Recordings: 1156

  • Tasks: 2

Channels & sampling rate
  • Channels: 394

  • Sampling rate (Hz): 1100.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Perception

Files & format
  • Size on disk: 87.6 GB

  • File count: 1156

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds000117.v1.1.0

Provenance

API Reference#

Use the DS000117 class to access this dataset programmatically.

class eegdash.dataset.DS000117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds000117. Modality: meg; Experiment type: Perception; Subject type: Healthy. Subjects: 16; recordings: 1156; tasks: 2.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/ds000117 NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds000117

Examples

>>> from eegdash.dataset import DS000117
>>> dataset = DS000117(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#