DS007648: EEG dataset, 22 subjects#
CrossModal Study
Access recordings and metadata through EEGDash.
Citation: Marion Brickwedde, Rupali Limachya, Roksana Markiewicz, Emma Sutton, Christopher Postzich, Kimron Shapiro, Ole Jensen, Ali Mazaheri (2026). CrossModal Study. 10.18112/openneuro.ds007648.v1.1.0
Modality: eeg · Subjects: 22 · Recordings: 22 · License: CC0 · Source: openneuro
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS007648
dataset = DS007648(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS007648(cache_dir="./data", subject="01")
Advanced query
dataset = DS007648(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds007648,
title = {CrossModal Study},
author = {Marion Brickwedde and Rupali Limachya and Roksana Markiewicz and Emma Sutton and Christopher Postzich and Kimron Shapiro and Ole Jensen and Ali Mazaheri},
doi = {10.18112/openneuro.ds007648.v1.1.0},
url = {https://doi.org/10.18112/openneuro.ds007648.v1.1.0},
}
About This Dataset#
README
## Contact person
marion.brickwedde@charite.de, ORCID: https://orcid.org/0000-0002-3461-038X
Overview
## Project Name
“CrossModal Study”

## Task
Participants received cues to prepare for the target (auditory, visual, or non-specific). Between cue and target were 3 seconds of task-irrelevant waiting time. For the duration of this interval, the fixation cross was frequency-tagged at 36 Hz and an amplitude-modulated 40 Hz tone was played. The target was either a Gabor patch or a short tone. Participants were asked to differentiate one of 3 targets of each modality and indicate it with the arrow buttons on the keyboard (e.g., low-pitch tone: left arrow button; middle-pitch tone: down arrow button; …). In the auditory and the visual cuing conditions, a random target of the uncued modality was also presented as a distractor in 50% of all trials. In the non-specific cue condition, no distractors were presented, as the modality was unclear. For the task help that participants saw at all times, refer to the stimuli folder (task_help_crossmodal.pdf). For an illustration of the timing of the task, refer to the stimuli folder (task.png). For a detailed description of the task, see https://doi.org/10.7554/eLife.106050.1.

## Task Instructions
Welcome and thank you for taking part in our experiment! From this moment on, the experiment will roughly take 1 hour and 20 minutes. If anything is wrong, please contact the experimenter at any time. In EEG studies, it is important to be as MOTIONLESS as possible (even when moving the eyes or blinking), so please try to refrain from body movement and reduce blinking to a minimal level you can still feel comfortable with. You will have time to move and close your eyes in the breaks, so please use this time. Please fixate the cross in the center of the screen. After a short time, one of the following three symbols will appear: (symbols shown on screen) They will tell you the type of target you will need to attend to later. Subsequently, you will see a cross and hear a sound for about 3 seconds. They are not important for you. During this time, continue to focus on the cross at the center of the screen. AFTER THIS, the target will appear. (Target tones are presented) You just heard the three target tones. In the task, you will have to decide which one was presented to you. The target could be either visual or auditory. You will not know which it is before the target appears. (Visual targets are presented) In the task, you will have to decide whether it is straight or tilted left or right. Use the arrow buttons on your keyboard to identify the target that was presented to you. Visual: tilted to the left (LEFT ARROW), straight (DOWN ARROW), and tilted to the right (RIGHT ARROW). Auditory: low tone (LEFT ARROW), middle tone (DOWN ARROW), and high tone (RIGHT ARROW). Always identify the target you were asked to attend to, even if both visual and auditory targets are presented. If you are not told to attend to a specific target, only one target will be presented. Your answer is timed; please try to accurately identify the target while also reacting as fast as you can. After the practice trials, there will be 28 blocks until the experiment is finished. If you have any questions, please inform the experimenter. If you understand the task, you can press the SPACE bar to start the practice trials. (After this, 36 practice trials were presented; these are NOT present in the recorded EEG data.)
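The arrow-key response mapping described in the instructions can be sketched as a small lookup table. This is purely illustrative; the names below are not identifiers from the dataset or from EEGDash:

```python
# Arrow-key response mapping from the task instructions (names are illustrative).
RESPONSE_KEYS = {
    "visual": {
        "left_arrow": "tilted left",
        "down_arrow": "straight",
        "right_arrow": "tilted right",
    },
    "auditory": {
        "left_arrow": "low tone",
        "down_arrow": "middle tone",
        "right_arrow": "high tone",
    },
}

# Both modalities map the same three keys to three targets.
for modality, mapping in RESPONSE_KEYS.items():
    assert set(mapping) == {"left_arrow", "down_arrow", "right_arrow"}
```

Such a table can be handy when recoding behavioural responses during analysis.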
Methods
Subjects
All participants were healthy controls. Inclusion criteria: normal or corrected-to-normal vision; no history of psychiatric or neurological illness (self-report).
Task organization
Six conditions were randomly ordered and balanced over the experiment:
1. non-specific cue -> auditory target
2. non-specific cue -> visual target
3. auditory cue/target -> no distractor
4. auditory cue/target -> visual distractor
5. visual cue/target -> no distractor
6. visual cue/target -> auditory distractor
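The six-condition design above can be expressed as a small Python structure, which is sometimes convenient for labelling trials during analysis. This is a sketch; the field names and condition numbering below are assumptions for illustration only:

```python
# The six trial conditions as (cue, target, distractor) tuples.
# Numbering and names are illustrative, not event codes from the dataset.
CONDITIONS = {
    1: ("non-specific", "auditory", None),
    2: ("non-specific", "visual", None),
    3: ("auditory", "auditory", None),
    4: ("auditory", "auditory", "visual"),
    5: ("visual", "visual", None),
    6: ("visual", "visual", "auditory"),
}

# Distractors only occur in modality-specific cue conditions,
# and always in the uncued modality.
for cue, target, distractor in CONDITIONS.values():
    if distractor is not None:
        assert cue != "non-specific"
        assert distractor != target
```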
Additional data acquired
1 extra EOG channel to monitor eye-activity
Experimental location
Birmingham University, United Kingdom
Missing data
sub-01 aborted the experiment. sub-17 did not perform above chance level. Both participants were excluded from the experiment. Acquisition dates were anonymised (set to 01.01.2019, the year in which the study was recorded). Acquisition time of day is correct.
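Since sub-01 and sub-17 were excluded by the authors, analyses may want to skip them. The class accepts MongoDB-style filters, so a `$nin` filter (mirroring the `$in` example in the quickstart) should work; whether these subjects appear in the released files at all is not stated here, so treat this as a sketch:

```python
# Hypothetical exclusion filter using the MongoDB-style `$nin` operator.
excluded = ["01", "17"]
query = {"subject": {"$nin": excluded}}

# The filter would then be passed to the dataset constructor, e.g.:
# from eegdash.dataset import DS007648
# dataset = DS007648(cache_dir="./data", query=query)
```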
Dataset Information#
| Field | Value |
| --- | --- |
| Dataset ID | ds007648 |
| Title | CrossModal Study |
| Author (year) | — |
| Canonical | — |
| Importable as | DS007648 |
| Year | 2026 |
| Authors | Marion Brickwedde, Rupali Limachya, Roksana Markiewicz, Emma Sutton, Christopher Postzich, Kimron Shapiro, Ole Jensen, Ali Mazaheri |
| License | CC0 |
| Citation / DOI | doi:10.18112/openneuro.ds007648.v1.1.0 |
| Source links | OpenNeuro · NeMAR · Source URL |
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 22
Recordings: 22
Tasks: 1
Channels: 64
Sampling rate (Hz): 500.0
Duration (hours): 17.1
Pathology: Not specified
Modality: —
Type: —
Size on disk: 7.0 GB
File count: 22
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds007648.v1.1.0
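As a rough consistency check, the reported size on disk can be estimated from the other numbers above, assuming uncompressed 32-bit float samples (an assumption; the actual BIDS files may use a different sample format or compression):

```python
# Back-of-the-envelope size estimate: channels x sampling rate x duration.
channels = 64
sfreq_hz = 500.0
duration_hours = 17.096666666666668
bytes_per_sample = 4  # assuming 32-bit floats

total_samples = duration_hours * 3600 * sfreq_hz * channels
estimated_gb = total_samples * bytes_per_sample / 1e9
print(f"{estimated_gb:.1f} GB")  # → 7.9 GB, in the same ballpark as the reported 7.0 GB
```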
API Reference#
Use the DS007648 class to access this dataset programmatically.
- class eegdash.dataset.DS007648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

  Bases: EEGDashDataset

  CrossModal Study

  - Study: ds007648 (OpenNeuro)
  - Author (year): —
  - Canonical: —
  - Also importable as: DS007648
  - Modality: eeg. Subjects: 22; recordings: 22; tasks: 1.

  Parameters:
  - cache_dir (str | Path) – Directory where data are cached locally.
  - query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
  - s3_bucket (str | None) – Base S3 bucket used to locate the data.
  - **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
  Attributes:
  - data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
  - query (dict) – Merged query with the dataset filter applied.
  - records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

- OpenNeuro dataset: https://openneuro.org/datasets/ds007648
- NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds007648
- DOI: https://doi.org/10.18112/openneuro.ds007648.v1.1.0
Examples
>>> from eegdash.dataset import DS007648
>>> dataset = DS007648(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
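The query-merging behaviour described in the notes can be illustrated without touching the network. The merge shown here is a plain dict union and is only a sketch of what the class does internally:

```python
# Sketch of how a user query might be AND-ed with the fixed dataset filter.
# Per the Parameters section, the user query must not contain the key "dataset".
dataset_filter = {"dataset": "ds007648"}
user_query = {"subject": {"$in": ["01", "02"]}}

assert "dataset" not in user_query
merged = {**dataset_filter, **user_query}
print(merged)  # → {'dataset': 'ds007648', 'subject': {'$in': ['01', '02']}}
```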
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset