DS003800#

Auditory Gamma Entrainment

Access recordings and metadata through EEGDash.

Citation: Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan, Zahra Vahabi (2021). Auditory Gamma Entrainment. 10.18112/openneuro.ds003800.v1.0.0

Modality: eeg Subjects: 13 Recordings: 118 License: CC0 Source: openneuro Citations: 4

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS003800

dataset = DS003800(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS003800(cache_dir="./data", subject="01")

Advanced query

dataset = DS003800(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds003800,
  title = {Auditory Gamma Entrainment},
  author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan and Zahra Vahabi},
  doi = {10.18112/openneuro.ds003800.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds003800.v1.0.0},
}

About This Dataset#

Introduction

This experiment was designed to entrain brain oscillations through synthetic auditory stimulation in a group of elderly participants with dementia. Gamma entrainment has recently been proposed and shown to be effective in improving several symptoms of Alzheimer’s Disease (AD). The aim of this study is to investigate the effect of entrainment on brain oscillations using EEG recorded during auditory stimulation. The study was approved by the Review Board of Tehran University of Medical Sciences (Approval ID: IR.TUMS.MEDICINE.REC.1398.524); all participants provided informed consent before participating and were free to withdraw at any time.

Rest data

Before the main task, one minute of eyes-open data was recorded to measure raw resting-state potentials. The rest data for participants 6 and 13 are missing.

Auditory stimulation

Two speakers were placed in front of the participant, 50 cm apart from each other, pointed directly at the participant’s ears at a distance of 50 cm. The sound intensity was around -40 dB, kept within a fixed range for all participants. Before the task started, each participant was asked whether the volume was loud enough, and the volume was set to a comfortable level. The auditory stimulus was a 5 kHz carrier tone amplitude-modulated with a 40 Hz rectangular wave (40 Hz On/Off cycles). Since a 40 Hz audio signal cannot easily be heard, the 5 kHz carrier frequency was used to render the 40 Hz pulse train audible. To minimize the effect of the carrier sound, the duty cycle of the modulating 40 Hz waveform was set to 4% (1 ms of each 25 ms cycle was On). The auditory stimulus was generated in MATLAB and played as a .wav file. The file consisted of six 40 s stimulation trials interleaved with five 20 s rest (silence) trials, yielding 340 s (6×40 + 5×20) of recorded EEG per session.
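The stimulus described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors’ MATLAB code; the audio sampling rate is an assumption (the text does not state one):

```python
import numpy as np

# Illustrative reconstruction of the stimulus (assumed 44.1 kHz audio rate):
# a 5 kHz carrier amplitude-modulated by a 40 Hz rectangular wave
# with a 4% duty cycle (1 ms On per 25 ms cycle).
fs = 44100            # audio sampling rate in Hz (assumed, not from the text)
cycle = 0.025         # one 40 Hz cycle = 25 ms
on_time = 0.001       # 1 ms On per cycle -> 4% duty cycle

t = np.arange(0, 1.0, 1 / fs)            # 1 s of stimulus
carrier = np.sin(2 * np.pi * 5000 * t)   # 5 kHz carrier tone
gate = (t % cycle) < on_time             # 40 Hz rectangular modulator
stimulus = carrier * gate

# Session structure: six 40 s stimulation trials interleaved with
# five 20 s silent rests -> 340 s of recorded EEG per session.
session_seconds = 6 * 40 + 5 * 20
```

The gate signal spends 4% of each 25 ms cycle On, matching the stated duty cycle, and the trial arithmetic reproduces the 340 s session length.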

EEG recording and preprocessing

All EEG data were recorded from 19 monopolar channels in the standard 10-20 system, referenced to the earlobes and sampled at 250 Hz; electrode impedance was kept under 20 kOhm. Data from all participants were preprocessed identically following Makoto’s preprocessing pipeline: high-pass filtering above 1 Hz; removal of line noise; rejection of potentially bad channels; interpolation of the rejected channels; re-referencing to the average; Artifact Subspace Reconstruction (ASR); re-referencing to the average again; estimation of brain source activity using independent component analysis (ICA); dipole fitting; and rejection of bad dipoles (sources) to further clean the data. These preprocessing steps were performed using the EEGLAB MATLAB toolbox.
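The 250 Hz sampling rate comfortably covers the 40 Hz entrainment band. As a minimal sketch of how one might quantify 40 Hz power in these recordings (a crude FFT-based estimate on a synthetic signal, not part of the authors’ EEGLAB pipeline):

```python
import numpy as np

fs = 250.0                                # dataset sampling rate (Hz)
t = np.arange(0, 10.0, 1 / fs)            # 10 s synthetic segment
rng = np.random.default_rng(0)
# Synthetic "entrained" signal: a 40 Hz component buried in noise
x = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
idx_40 = np.argmin(np.abs(freqs - 40.0))  # bin closest to 40 Hz
power_40 = spectrum[idx_40]

# Crude SNR: 40 Hz power relative to the mean of nearby off-peak bins
neighbors = np.r_[spectrum[idx_40 - 10:idx_40 - 2],
                  spectrum[idx_40 + 3:idx_40 + 11]]
snr_40 = power_40 / neighbors.mean()
```

In practice one would apply this per trial to the preprocessed EEG (e.g., the `raw` objects from the Quickstart) rather than to synthetic data.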

Instructions

During the experiment, participants were seated comfortably with their eyes open in a quiet room. They were instructed to relax their bodies to avoid muscle artifacts and to move their heads as little as possible.

Dataset Information#

Dataset ID

DS003800

Title

Auditory Gamma Entrainment

Year

2021

Authors

Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan, Zahra Vahabi

License

CC0

Citation / DOI

10.18112/openneuro.ds003800.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds003800,
  title = {Auditory Gamma Entrainment},
  author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan and Zahra Vahabi},
  doi = {10.18112/openneuro.ds003800.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds003800.v1.0.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 13

  • Recordings: 118

  • Tasks: 2

Channels & sampling rate
  • Channels: 19

  • Sampling rate (Hz): 250.0

  • Duration (hours): 0.0

Tags
  • Pathology: Dementia

  • Modality: Auditory

  • Type: Clinical/Intervention

Files & format
  • Size on disk: 189.3 MB

  • File count: 118

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: 10.18112/openneuro.ds003800.v1.0.0


API Reference#

Use the DS003800 class to access this dataset programmatically.

class eegdash.dataset.DS003800(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds003800. Modality: eeg; Experiment type: Clinical/Intervention; Subject type: Dementia. Subjects: 13; recordings: 24; tasks: 2.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/ds003800
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds003800

Examples

>>> from eegdash.dataset import DS003800
>>> dataset = DS003800(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None
