DS007640: MEG dataset, 23 subjects#
Dataset of emotion recognition using validated video stimuli with large-scale behavioral survey and MEG recordings
Access recordings and metadata through EEGDash.
Citation: Moon-A Yoo, Dong-Uk Kim, Soo-In Choi, Min-Young Kim, Sung-Phil Kim (2026). Dataset of emotion recognition using validated video stimuli with large-scale behavioral survey and MEG recordings. 10.18112/openneuro.ds007640.v1.0.0
Modality: MEG | Subjects: 23 | Recordings: 94 | License: CC0 | Source: OpenNeuro
Metadata completeness: 90%
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS007640
dataset = DS007640(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS007640(cache_dir="./data", subject="01")
Advanced query
dataset = DS007640(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
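The query parameter accepts other standard MongoDB operators as well. A minimal sketch that excludes rather than selects subjects; $nin ("not in") is stock MongoDB syntax, mirroring the $in example above:
from eegdash.dataset import DS007640

# Exclude two subjects instead of selecting them ($nin = "not in").
dataset = DS007640(
    cache_dir="./data",
    query={"subject": {"$nin": ["01", "02"]}},
)
print(len(dataset.datasets))  # number of remaining recordings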
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
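Because each recording exposes an MNE Raw object (as in the examples above), standard MNE-Python operations apply directly. A minimal preprocessing sketch; the 1-40 Hz band and the PSD range are illustrative choices, not recommendations from the dataset authors:
from eegdash.dataset import DS007640

dataset = DS007640(cache_dir="./data")
raw = dataset.datasets[0].raw

# Work on a preloaded copy so the cached object stays untouched.
raw_copy = raw.copy().load_data()

# Band-pass filter to an illustrative analysis band.
raw_copy.filter(l_freq=1.0, h_freq=40.0)

# Inspect the power spectrum (MNE >= 1.2 API).
psd = raw_copy.compute_psd(fmax=60.0)
print(psd)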
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds007640,
  title = {Dataset of emotion recognition using validated video stimuli with large-scale behavioral survey and MEG recordings},
  author = {Moon-A Yoo and Dong-Uk Kim and Soo-In Choi and Min-Young Kim and Sung-Phil Kim},
  doi = {10.18112/openneuro.ds007640.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds007640.v1.0.0},
}
About This Dataset#
Dataset of Emotion Recognition Using Validated Video Stimuli with Large-scale Behavioral Survey and MEG Recordings
General Description
This dataset was developed as part of research on brain signal-based emotion recognition, capturing high-fidelity magnetoencephalography (MEG) signals during the induction of different emotional states. The dataset is organized into three primary components:
1. Phenotype (Online Survey): Stored within the ‘phenotype’ directory, this component contains the results of a large-scale subjective emotional assessment of the set of 40 video stimuli. It includes responses from 500 participants, providing a robust behavioral baseline (see the loading sketch after this list).
2. Source Data (Head Digitization): Located in the ‘sourcedata’ directory, this component contains head position information measured prior to each recording session. The resulting configuration files (‘.cfg’) store the 3D spatial coordinates (x, y, z) of anatomical landmarks and Head Position Indicator (HPI) coils, which are essential for accurate co-registration.
3. MEG Recordings: Comprehensive MEG neural recordings from 23 participants, who viewed the same 40 validated video clips used in the online survey. These recordings enable the investigation of emotion-specific neural signal patterns and are organized into subject-specific directories (e.g., ‘sub-01’).
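For orientation, phenotype data in a BIDS dataset are plain TSV files, so the survey responses can be inspected with pandas once the dataset is cached. A minimal sketch; the cache path follows the data_dir convention documented in the API Reference below (cache_dir / dataset_id), but the exact TSV filenames under phenotype/ should be checked against the download:
from pathlib import Path
import pandas as pd

# data_dir is cache_dir / dataset_id (see API Reference below).
phenotype_dir = Path("./data/ds007640/phenotype")
for tsv in phenotype_dir.glob("*.tsv"):
    survey = pd.read_csv(tsv, sep="\t")
    print(tsv.name, survey.shape)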
To capture the richness of emotional experiences, we employed a multi-faceted assessment paradigm. Beyond the standard Self-Assessment Manikin (SAM) responses for Valence and Arousal, our labels include discrete emotion categories (PrEmo) and temporal highlight scene selections, providing a more granular ‘ground truth’ for affective states.
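Relating these labels to the MEG signal typically starts by epoching the recordings around stimulus events. A minimal sketch using MNE's standard annotation-to-event conversion; the event names it prints, and the tmin/tmax window below, are placeholders to verify against each recording, not values from the dataset description:
import mne
from eegdash.dataset import DS007640

dataset = DS007640(cache_dir="./data")
raw = dataset.datasets[0].raw

# Map whatever annotations the recording carries to MNE events.
events, event_id = mne.events_from_annotations(raw)
print(event_id)

# Illustrative epoching window around stimulus onsets.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.5, tmax=2.0, preload=True)
print(epochs)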
Citation
For a detailed description of the stimulus selection, experimental design, and data acquisition process, please refer to the publication listed below. We kindly request that any research utilizing this dataset cite the following paper: [Add on Reference/DOI]
Dataset Information#
Dataset ID | ds007640
Title | Dataset of emotion recognition using validated video stimuli with large-scale behavioral survey and MEG recordings
Author (year) | —
Canonical | —
Importable as | DS007640
Year | 2026
Authors | Moon-A Yoo, Dong-Uk Kim, Soo-In Choi, Min-Young Kim, Sung-Phil Kim
License | CC0
Citation / DOI | doi:10.18112/openneuro.ds007640.v1.0.0
Source links | OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 23
Recordings: 94
Tasks: 4
Channels: Varies
Sampling rate (Hz): 1024.0
Duration (hours): Not calculated
Pathology: Not specified
Modality: MEG
Type: —
Size on disk: 88.8 GB
File count: 94
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds007640.v1.0.0
API Reference#
Use the DS007640 class to access this dataset programmatically.
- class eegdash.dataset.DS007640(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)
Bases: EEGDashDataset
Dataset of emotion recognition using validated video stimuli with large-scale behavioral survey and MEG recordings.
- Study: ds007640 (OpenNeuro)
- Author (year): —
- Canonical: —
- Also importable as: DS007640
- Modality: meg. Subjects: 23; recordings: 94; tasks: 4.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir
  Local dataset cache directory (cache_dir / dataset_id).
  Type: Path
- query
  Merged query with the dataset filter applied.
  Type: dict
- records
  Metadata records used to build the dataset, if pre-fetched.
  Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
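As a quick illustration of the metadata access described in the notes (a sketch; description behaves like a pandas DataFrame in braindecode-style datasets, and the columns present depend on the dataset's metadata):
from eegdash.dataset import DS007640

dataset = DS007640(cache_dir="./data")

# Per-recording metadata table; available columns vary by dataset.
print(dataset.description.head())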
References
OpenNeuro dataset: https://openneuro.org/datasets/ds007640
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds007640
DOI: https://doi.org/10.18112/openneuro.ds007640.v1.0.0
Examples
>>> from eegdash.dataset import DS007640
>>> dataset = DS007640(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset