DS003805#
Multisensory Gamma Entrainment
Access recordings and metadata through EEGDash.
Citation: Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan (2021). Multisensory Gamma Entrainment. 10.18112/openneuro.ds003805.v1.0.0
Modality: eeg Subjects: 1 Recordings: 10 License: CC0 Source: openneuro Citations: 3
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS003805
dataset = DS003805(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS003805(cache_dir="./data", subject="01")
Advanced query
dataset = DS003805(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds003805,
  title  = {Multisensory Gamma Entrainment},
  author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan},
  doi    = {10.18112/openneuro.ds003805.v1.0.0},
  url    = {https://doi.org/10.18112/openneuro.ds003805.v1.0.0},
}
About This Dataset#
Introduction
This experiment was designed to study the effects of different sensory modalities (auditory, visual, and audio-visual) on brain entrainment. The EEG data were collected from a young, healthy volunteer (a 23-year-old male). Gamma entrainment driven by individual (auditory or visual) sensory stimulation, as well as by simultaneous auditory and visual stimulation, has recently been proposed and shown to be effective in improving several symptoms of Alzheimer's Disease (AD) in mice and humans. The aim of this study is to investigate the effect of the different modalities in producing synchronized brain oscillations. The task consists of three epochs of auditory, visual, and audio-visual stimulation, respectively, each lasting 40 s, all within a single session.
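Assuming the three 40 s epochs are recorded back to back (the actual run may include rests between epochs, so the offsets below are an assumption), the session can be segmented along these lines; the array here is a synthetic stand-in for a real 19-channel recording:

```python
import numpy as np

SFREQ = 500          # sampling rate of the recordings (Hz)
EPOCH_SEC = 40       # each stimulation epoch lasts 40 s
CONDITIONS = ["auditory", "visual", "audio-visual"]

# Synthetic stand-in for a continuous recording: 19 channels x 120 s.
data = np.random.randn(19, 3 * EPOCH_SEC * SFREQ)

# Cut the continuous signal into the three consecutive 40 s epochs,
# in the order the conditions were presented.
samples_per_epoch = EPOCH_SEC * SFREQ
epochs = {
    cond: data[:, i * samples_per_epoch:(i + 1) * samples_per_epoch]
    for i, cond in enumerate(CONDITIONS)
}

for cond, seg in epochs.items():
    print(cond, seg.shape)
```

With real data, the same slicing applies to the array returned by `raw.get_data()` once the true epoch onsets are known from the recording's events.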
Auditory stimulation
Two speakers were placed in front of the participant, 50 cm apart from each other, pointed directly at the participant's ears at a distance of 50 cm. The sound intensity was set to around -40 dB. Before the task started, the participant was asked whether the volume was loud enough, and it was adjusted to a comfortable level for him. The auditory stimulus was a 5 kHz carrier tone amplitude-modulated with a 40 Hz rectangular wave (40 Hz On and Off cycles). Since a 40 Hz audio signal cannot easily be heard, the 5 kHz carrier frequency was used to render the 40 Hz pulse train audible. To minimize the effect of the carrier sound, the duty cycle of the modulating 40 Hz waveform was set to 4% (1 ms of the 25 ms cycle was On). The auditory stimulus was generated in MATLAB and played as a .wav file consisting of 40 s of stimulation.
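The stimulus waveform can be reconstructed from these parameters; the 44.1 kHz audio sampling rate is an assumption, since the dataset does not state one:

```python
import numpy as np

FS = 44100          # audio sampling rate (assumed; not stated in the dataset)
CARRIER_HZ = 5000   # 5 kHz carrier tone
MOD_HZ = 40         # 40 Hz rectangular modulation
DUTY = 0.04         # 4% duty cycle: 1 ms On per 25 ms cycle
DUR_S = 40          # 40 s of stimulus

t = np.arange(int(FS * DUR_S)) / FS
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)

# Rectangular 40 Hz envelope: On during the first 4% of each 25 ms cycle.
phase = (t * MOD_HZ) % 1.0
envelope = (phase < DUTY).astype(float)

stimulus = carrier * envelope
```

Writing `stimulus` to a .wav file (e.g. with `scipy.io.wavfile.write`) reproduces the kind of file the authors describe playing during the auditory epoch.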
Visual stimulation
The visual stimulus was a 20 Hz flickering white light produced by an array of LEDs and reflected from a white wall 50 cm in front of the participant (eyes open), with 50% On cycles (duty cycle = 50%), flickering for 40 s. Due to the harmonic frequencies present in the pulse train of the stimulus, the 20 Hz stimulation is able to drive 40 Hz oscillations in the brain.
EEG recording and preprocessing
The EEG data were recorded from 19 monopolar channels in the standard 10-20 system, referenced to the earlobes and sampled at 500 Hz; electrode impedance was kept under 20 kOhm. Data from all three epochs were preprocessed identically, following Makoto's preprocessing pipeline: high-pass filtering above 1 Hz; removal of line noise; rejection of potential bad channels; interpolation of the rejected channels; re-referencing to the average; Artifact Subspace Reconstruction (ASR); re-referencing to the average again; estimation of brain source activity using independent component analysis (ICA); dipole fitting; and rejection of bad dipoles (sources) to further clean the data. These preprocessing steps were performed using the EEGLAB MATLAB toolbox.
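A minimal sketch of the first few pipeline steps (high-pass, line-noise notch, average reference) using only NumPy/SciPy on synthetic data; the 50 Hz mains frequency, filter orders, and notch Q are assumptions, and the ASR/ICA/dipole-fitting stages are omitted since they require dedicated tooling such as EEGLAB or MNE:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

SFREQ = 500
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 10 * SFREQ))  # 19 channels, 10 s of toy data

# 1) High-pass filter above 1 Hz (zero-phase; order is an assumption).
b_hp, a_hp = butter(4, 1.0, btype="highpass", fs=SFREQ)
eeg = filtfilt(b_hp, a_hp, eeg, axis=1)

# 2) Notch out line noise (50 Hz mains is an assumption).
b_n, a_n = iirnotch(50.0, Q=30.0, fs=SFREQ)
eeg = filtfilt(b_n, a_n, eeg, axis=1)

# 3) Re-reference each sample to the average across channels.
eeg -= eeg.mean(axis=0, keepdims=True)
```

After the average reference, the mean across channels is zero at every sample, which is a quick sanity check for this step.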
Instructions
During the experiment, the participant was seated comfortably, with eyes open, in a quiet room. He was instructed to relax his body to avoid muscle artifacts and to move his head as little as possible. The participant was free to rest after each epoch, but the EEG cap was not taken off.
Dataset Information#
Dataset ID: ds003805
Title: Multisensory Gamma Entrainment
Year: 2021
Authors: Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan
License: CC0
Citation / DOI: 10.18112/openneuro.ds003805.v1.0.0
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 1
Recordings: 10
Tasks: 1
Channels: 19
Sampling rate (Hz): 500.0
Duration (hours): 0.0
Pathology: Not specified
Modality: —
Type: —
Size on disk: 8.8 MB
File count: 10
Format: BIDS
License: CC0
DOI: 10.18112/openneuro.ds003805.v1.0.0
API Reference#
Use the DS003805 class to access this dataset programmatically.
- class eegdash.dataset.DS003805(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
  Bases: EEGDashDataset
  OpenNeuro dataset ds003805. Modality: eeg; Experiment type: Learning; Subject type: Healthy. Subjects: 1; recordings: 1; tasks: 1.
  - Parameters:
    - cache_dir (str | Path) – Directory where data are cached locally.
    - query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
    - s3_bucket (str | None) – Base S3 bucket used to locate the data.
    - **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
  Local dataset cache directory (cache_dir / dataset_id).
  - Type: Path
- query#
  Merged query with the dataset filter applied.
  - Type: dict
- records#
  Metadata records used to build the dataset, if pre-fetched.
  - Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds003805
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds003805
Examples
>>> from eegdash.dataset import DS003805
>>> dataset = DS003805(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset, eegdash.dataset