NM000341: EEG dataset, 12 subjects#

Cattan2019-PHMD

Access recordings and metadata through EEGDash.

Citation: G. Cattan, P. L. C. Rodrigues, M. Congedo (2019). Cattan2019-PHMD. doi:10.5281/zenodo.2617084

Modality: eeg | Subjects: 12 | Recordings: 12 | License: CC-BY-4.0 | Source: nemar

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000341

dataset = NM000341(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000341(cache_dir="./data", subject="01")

Advanced query

dataset = NM000341(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
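The query argument uses MongoDB-style operators. As an illustration of the `$in` semantics (a minimal sketch of the matching logic, not EEGDash's actual implementation), a record is selected when its field value appears in the listed set:

```python
def matches(record, query):
    """Check a record against a minimal MongoDB-style query ($in only)."""
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict) and "$in" in condition:
            if value not in condition["$in"]:
                return False
        elif value != condition:
            return False
    return True

records = [{"subject": f"{i:02d}"} for i in range(1, 13)]  # subjects 01..12
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```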

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000341,
  title = {Cattan2019-PHMD},
  author = {G. Cattan and P. L. C. Rodrigues and M. Congedo},
  doi = {10.5281/zenodo.2617084},
  url = {https://doi.org/10.5281/zenodo.2617084},
}

About This Dataset#


Cattan2019-PHMD

Passive Head-Mounted Display with Music Listening dataset [1].

Dataset Overview

  • Code: Cattan2019-PHMD

  • Paradigm: rstate (resting state)

  • DOI: 10.5281/zenodo.2617084

  • Subjects: 12

  • Sessions per subject: 1

  • Events: off=1, on=2

  • Trial interval: [0, 1] s

  • File format: mat and csv

Acquisition

  • Sampling rate: 512.0 Hz

  • Number of channels: 16

  • Channel types: eeg=16

  • Channel names: Cz, Fc5, Fc6, Fp1, Fp2, Fz, O1, O2, Oz, P3, P4, P7, P8, Pz, T7, T8

  • Montage: standard_1020

  • Hardware: g.USBamp

  • Software: OpenViBE

  • Reference: right earlobe

  • Ground: AFz

  • Sensor type: wet

  • Line frequency: 50.0 Hz

  • Online filters: no digital filter

  • Cap manufacturer: EasyCap

  • Cap model: EC20

  • Electrode type: wet

Participants

  • Number of subjects: 12

  • Health status: healthy

  • Age: mean=26.25, std=2.63

  • Gender distribution: male=9, female=3

  • Species: human

Experimental Protocol

  • Paradigm: rstate (resting state)

  • Number of classes: 2

  • Class labels: off, on

  • Trial duration: 60.0 s

  • Study design: subjects focused on a fixation marker while listening to music (Bach Inventions 1 to 10 on harpsichord) played during the experiment

  • Feedback type: none

  • Stimulus type: visual fixation marker

  • Stimulus modalities: visual, auditory

  • Primary modality: auditory

  • Training/test split: False

  • Instructions: subjects were asked to focus on the marker and to listen to the music played during the experiment
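The trial interval and the 512 Hz sampling rate determine the epoch length in samples. A quick sanity check, assuming an end-exclusive window (MNE's inclusive `tmax` would yield one extra sample):

```python
SFREQ = 512.0          # sampling rate (Hz), from the Acquisition section
TMIN, TMAX = 0.0, 1.0  # trial interval [0, 1] s

# Samples per epoch with an end-exclusive window.
n_samples = int(round((TMAX - TMIN) * SFREQ))
print(n_samples)  # 512
```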

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

off
├─ Experiment-structure
└─ Rest

on
├─ Experiment-structure
└─ Rest

Data Structure

  • Blocks per session: 10

  • Block duration: 60.0 s

  • Trials context: 5 blocks with the smartphone switched off and 5 with it switched on, in a randomized sequence
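The block structure can be sketched as a reproducible randomization (illustrative only; the actual randomized sequences used in the experiment are part of the recordings):

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible
blocks = ["off"] * 5 + ["on"] * 5  # 5 smartphone-off and 5 smartphone-on blocks
rng.shuffle(blocks)                # randomized block order, as in the protocol

BLOCK_DURATION_S = 60.0
session_length_min = len(blocks) * BLOCK_DURATION_S / 60.0
print(blocks.count("off"), blocks.count("on"), session_length_min)  # 5 5 10.0
```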

Preprocessing

  • Data state: raw, unfiltered

  • Preprocessing applied: False

  • Notes: data were acquired with no digital filter; no Faraday cage was used, to mimic real-world usage

BCI Application

  • Applications: vr_ar

  • Environment: laboratory

  • Online feedback: False

Tags

  • Pathology: Healthy

  • Modality: EEG

  • Type: Resting State

Documentation

  • Description: This dataset contains electroencephalographic recordings of 12 subjects listening to music with and without a passive head-mounted display

  • DOI: 10.5281/zenodo.2617084

  • Associated paper DOI: 10.2312/vriphys.20181064

  • License: CC-BY-4.0

  • Investigators: G. Cattan, P. L. C. Rodrigues, M. Congedo

  • Senior author: M. Congedo

  • Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP

  • Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France

  • Country: FR

  • Repository: Zenodo

  • Data URL: https://doi.org/10.5281/zenodo.2617084

  • Publication year: 2019

  • How to acknowledge: Python code for manipulating the data is available at https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA

  • Keywords: Electroencephalography (EEG), Virtual Reality (VR), Passive Head-Mounted Display (PHMD), experiment

Abstract

We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2617084 in mat (Mathworks, Natick, USA) and csv formats. This dataset contains electroencephalographic recordings of 12 subjects listening to music with and without a passive head-mounted display, that is, a head-mounted display which does not include any electronics except for a smartphone. The electroencephalographic headset consisted of 16 electrodes. Data were recorded during a pilot experiment that took place at the GIPSA-lab, Grenoble, France, in 2017. Python code for manipulating the data is available at https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA. The ID of this dataset is PHMDML.EEG.2017-GIPSA.

Methodology

Subjects sat in front of a screen at a distance of ~50 cm, with no instrumental noise-reduction devices. The EEG cap and the Samsung Gear were placed on the subject. The smartphone alternated between the switched-on and switched-off conditions across blocks. Each block consisted of 1 minute of EEG recording with eyes open. The sequence of 10 blocks was randomized before the experiment using a random number generator with no autocorrelation. Triggers marked the beginning of each block (1 = switched-off, 2 = switched-on).
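The trigger scheme maps directly to condition labels. A minimal decoding sketch (the trigger sequence below is hypothetical, for illustration only):

```python
EVENT_ID = {1: "off", 2: "on"}  # trigger codes: 1 = switched-off, 2 = switched-on

triggers = [2, 1, 1, 2, 2, 1, 2, 1, 1, 2]  # hypothetical randomized sequence
conditions = [EVENT_ID[t] for t in triggers]
print(conditions[:4])  # ['on', 'off', 'off', 'on']
```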

References

  1. G. Cattan, P. L. Coelho Rodrigues, and M. Congedo, "Passive Head-Mounted Display Music-Listening EEG dataset", GIPSA-lab; IHMTEK, Research Report 2, Mar. 2019. doi: 10.5281/zenodo.2617084

  2. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

  3. Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb

Dataset Information#

Dataset ID

NM000341

Title

Cattan2019-PHMD

Author (year)

Cattan2019_PHMD

Canonical

Importable as

NM000341, Cattan2019_PHMD

Year

2019

Authors

  G. Cattan, P. L. C. Rodrigues, M. Congedo

License

CC-BY-4.0

Citation / DOI

doi:10.5281/zenodo.2617084

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{nm000341,
  title = {Cattan2019-PHMD},
  author = {G. Cattan and P. L. C. Rodrigues and M. Congedo},
  doi = {10.5281/zenodo.2617084},
  url = {https://doi.org/10.5281/zenodo.2617084},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 12

  • Recordings: 12

  • Tasks: 1

Channels & sampling rate
  • Channels: 16

  • Sampling rate (Hz): 512.0

  • Duration (hours): 2.74

Tags
  • Pathology: Healthy

  • Modality: Auditory

  • Type: Resting-state

Files & format
  • Size on disk: 231.3 MB

  • File count: 12

  • Format: BIDS

License & citation
  • License: CC-BY-4.0

  • DOI: doi:10.5281/zenodo.2617084

Provenance

API Reference#

Use the NM000341 class to access this dataset programmatically.

class eegdash.dataset.NM000341(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

Cattan2019-PHMD

Study:

nm000341 (NeMAR)

Author (year):

Cattan2019_PHMD

Canonical:

Also importable as: NM000341, Cattan2019_PHMD.

Modality: eeg; Experiment type: Resting-state; Subject type: Healthy. Subjects: 12; recordings: 12; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
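A plausible sketch of the query merge described in the Notes (a hypothetical helper, not EEGDash's actual internals):

```python
def merge_query(user_query, dataset_id):
    """AND a user query with the fixed dataset filter, as the Notes describe."""
    merged = dict(user_query or {})
    if "dataset" in merged:
        raise ValueError("query must not contain the key 'dataset'")
    merged["dataset"] = dataset_id  # dataset filter is always applied
    return merged

print(merge_query({"subject": {"$in": ["01", "02"]}}, "nm000341"))
# {'subject': {'$in': ['01', '02']}, 'dataset': 'nm000341'}
```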

References

  • OpenNeuro dataset: https://openneuro.org/datasets/nm000341

  • NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000341

  • DOI: https://doi.org/10.5281/zenodo.2617084

Examples

>>> from eegdash.dataset import NM000341
>>> dataset = NM000341(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#