DS005795#

MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)

Access recordings and metadata through EEGDash.

Citation: Jörg Stadler, Torsten Stöter, Nicole Angenstein, Andreas Fügner, Marcel Lommerzheim, Artur Mathysiak, Anke Michalsky, Gabriele Schöps, Johann van der Meer, Susann Wolff, André Brechmann (2025). MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data). 10.18112/openneuro.ds005795.v1.0.0

Modality: eeg · Subjects: 34 · Recordings: 634 · License: CC0 · Source: openneuro

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS005795

dataset = DS005795(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS005795(cache_dir="./data", subject="01")

Advanced query

dataset = DS005795(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds005795,
  title = {MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)},
  author = {Jörg Stadler and Torsten Stöter and Nicole Angenstein and Andreas Fügner and Marcel Lommerzheim and Artur Mathysiak and Anke Michalsky and Gabriele Schöps and Johann van der Meer and Susann Wolff and André Brechmann},
  doi = {10.18112/openneuro.ds005795.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds005795.v1.0.0},
}

About This Dataset#

Overview

The study comprises data from a combined fMRI/EEG experiment. The EEG files contain 63 head channels plus ECG, EOG, facial EMG, and skin conductance data; a physio file contains respiration and finger-pulse data. In addition, a T1-weighted whole-brain anatomical MR scan and a PD-weighted (UTE) scan for electrode localization are provided (defacing was performed using cbinyu/pydeface). Additional participant data (T2-weighted images, button-press dynamics, hearing threshold, hearing abilities, and personality traits: NEO-FFI, BIS/BAS, SVF, ERQ, MMG) are available on request.

The study was conducted at the Combinatorial NeuroImaging (CNI) core facility of the Leibniz Institute for Neurobiology (LIN) Magdeburg and was approved by the ethics committee of the University of Magdeburg, Germany. All participants gave written informed consent.

Currently you will only find 5 datasets that include the multi-dimensional category learning experiment (cf. Wolff & Brechmann, Cerebral Cortex, 2023), because of the copyright policy of OpenNeuro (i.e. CC0). If you are interested in the remaining datasets, please contact brechmann@lin-magdeburg.de. Collaboration is highly welcome!

Details of the learning task

The auditory category learning experiment comprised 180 trials in which 160 different frequency-modulated (FM) sounds were presented in pseudo-randomized order, with an inter-trial interval jittered at 6, 8, or 10 s plus 19–95 ms in steps of 19 ms, ensuring a pseudo-random offset between sound onset and the onset of MR volume acquisition. Each sound had five binary features: duration (short: 400 ms, long: 800 ms), direction of the frequency modulation (rising, falling), intensity (soft: 76–81 dB, loud: 86–91 dB), speed of the frequency modulation (slow: 0.25 octaves/s, fast: 0.5 octaves/s), and frequency range (low: 500–831 Hz, high: 1630–2639 Hz, with 5 different ranges each).

Participants had to learn, by trial and error, a target category defined by a combination of the features duration and direction (i.e. long/rising, long/falling, short/rising, or short/falling). In each trial, participants had to indicate via button press whether they thought a sound belonged to the target category (right index finger) or not (right middle finger). They received feedback about the correctness of the response from a prerecorded female voice in standard German, e.g. “ja” (yes) or “richtig” (right) following correct responses, and “nein” (no) or “falsch” (wrong) following incorrect responses. In 90% of the trials the feedback immediately followed the button press; in 10% it was delayed by 1500 ms. If participants failed to respond within 2 seconds after FM tone onset, a timeout feedback (“zu spät”, too late) was presented. During the ~27 min learning experiment, participants were asked to fixate a white cross on a grey background and to avoid any movements. For the 10 min rs-fMRI, they were asked to close their eyes.
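
As a worked illustration of the jitter scheme described above, the set of possible inter-trial intervals (three base durations, each combined with five 19 ms jitter steps) can be enumerated directly:

```python
# Possible inter-trial intervals (in seconds), per the task description:
# base ITI of 6, 8, or 10 s plus a jitter of 19-95 ms in steps of 19 ms.
bases_s = [6.0, 8.0, 10.0]
jitters_s = [k * 0.019 for k in range(1, 6)]  # 0.019, 0.038, ..., 0.095 s

itis = sorted(round(b + j, 3) for b in bases_s for j in jitters_s)
print(len(itis))        # 15 distinct ITI values
print(itis[0], itis[-1])  # 6.019 10.095
```

With 3 bases and 5 jitter steps, 15 distinct intervals result, ranging from 6.019 s to 10.095 s.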

Technical details

MR data were acquired with a 3 Tesla MRI scanner (Philips Achieva dStream) equipped with a 32-channel head coil. The MR scanner generates a trigger signal used to synchronize the multimodal data acquisition. The timing of stimulus events and the participants’ responses was controlled by the software Presentation (Neurobehavioral Systems) running on a Windows stimulation PC. Auditory stimuli were presented via a Mark II+ (MR-Confon, Magdeburg, Germany) audio control unit through MR-compatible electrodynamic headphones with integrated ear muffs that provide passive damping of ambient scanner noise by ~24 dB; earplugs (Bilsom 303) further reduce the noise by ~29 dB (SNR).

Button presses of the participants were recorded with the ResponseBox 2.0 by Covilex (Magdeburg, Germany), which includes a response pad with two buttons. The device delivers continuous 8-bit data at a sampling rate of 500 Hz; its Teensy microcontroller converts left and right button presses that exceed a defined threshold into USB keyboard events handled by the stimulation PC. Respiration and heart rate were recorded with Invivo MRI sensors at a sampling rate of 100 Hz and stored on the MRI acquisition PC at a 496 Hz sampling rate.

64-channel EEG (including ECG) was recorded at 5 kHz using two 32-channel BrainAmp MRplus amplifiers (Brain Products GmbH, Gilching, Germany). The amplifiers’ resolution was set to 0.5 µV/bit (range of ±16.38 mV), and the signals were hardware-filtered in the frequency band between 0.01 Hz and 250 Hz. A bipolar 16-channel BrainAmp ExG MR amplifier was used to record 2 EOG and 4 EMG (Corrugator, Zygomaticus) channels as well as signals from 4 carbon wire loops (CWL) for correcting pulse- and motion-related artifacts. Another BrainAmp ExG MR amplifier with an ExG AUX box was used to record skin conductance (GSR) at the index finger of the participant’s non-dominant hand. All signals were synchronized with the MR trigger via a Sync box and two USB2 adapters.

All data were recorded and stored with the BrainVision Recorder software. Preprocessing (MR-artifact correction, bandpass filtering between 0.3 and 125 Hz, downsampling to 500 Hz with subsequent CWL correction) and export of the EEG data were performed in BrainVision Analyzer 2.3. Raw data for optimized artifact correction are available upon request.
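
The bandpass and downsampling steps described above can be sketched in plain SciPy. This is an illustrative approximation only, not the authors' BrainVision Analyzer pipeline (which additionally performs MR-artifact and CWL correction); the filter order and Butterworth design are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, decimate

# Illustrative sketch: a zero-phase Butterworth bandpass (0.3-125 Hz)
# followed by decimation from 5 kHz to 500 Hz, mirroring the filter band
# and target rate described above.
def preprocess(x, fs=5000.0, lo=0.3, hi=125.0, target_fs=500.0):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)          # zero-phase (forward-backward) filter
    q = int(round(fs / target_fs))   # decimation factor: 10
    return decimate(y, q)            # decimate applies anti-alias filtering

x = np.random.randn(10_000)          # 2 s of synthetic signal at 5 kHz
y = preprocess(x)
print(y.shape)                       # (1000,) -> 2 s at 500 Hz
```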

Dataset Information#

Dataset ID

DS005795

Title

MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)

Year

2025

Authors

Jörg Stadler, Torsten Stöter, Nicole Angenstein, Andreas Fügner, Marcel Lommerzheim, Artur Mathysiak, Anke Michalsky, Gabriele Schöps, Johann van der Meer, Susann Wolff, André Brechmann

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds005795.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds005795,
  title = {MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)},
  author = {Jörg Stadler and Torsten Stöter and Nicole Angenstein and Andreas Fügner and Marcel Lommerzheim and Artur Mathysiak and Anke Michalsky and Gabriele Schöps and Johann van der Meer and Susann Wolff and André Brechmann},
  doi = {10.18112/openneuro.ds005795.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds005795.v1.0.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 34

  • Recordings: 634

  • Tasks: 3

Channels & sampling rate
  • Channels: 72

  • Sampling rate (Hz): 500.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Auditory

  • Type: Learning

Files & format
  • Size on disk: 6.4 GB

  • File count: 634

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds005795.v1.0.0


API Reference#

Use the DS005795 class to access this dataset programmatically.

class eegdash.dataset.DS005795(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds005795. Modality: eeg; Experiment type: Learning; Subject type: Healthy. Subjects: 34; recordings: 634; tasks: 3.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
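
To illustrate the MongoDB-style filter semantics mentioned above, here is a minimal, hypothetical matcher for the `$in` operator. eegdash evaluates queries itself; this sketch only shows what a filter like the Advanced-query example selects, and is not the library's implementation:

```python
# Minimal sketch of MongoDB-style matching with "$in" and equality filters.
def matches(record, query):
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01', '02']
```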

References

OpenNeuro dataset: https://openneuro.org/datasets/ds005795
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005795

Examples

>>> from eegdash.dataset import DS005795
>>> dataset = DS005795(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#