DS003846#
Prediction Error
Access recordings and metadata through EEGDash.
Citation: Lukas Gehrke, Sezen Akman, Albert Chen, Pedro Lopes, Klaus Gramann (2021). Prediction Error. 10.18112/openneuro.ds003846.v2.0.2
Modality: eeg Subjects: 19 Recordings: 325 License: CC0 Source: openneuro Citations: 5.0
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS003846
dataset = DS003846(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS003846(cache_dir="./data", subject="01")
Advanced query
dataset = DS003846(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
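The query argument accepts MongoDB-style operators such as $in, which matches any record whose field value appears in the given list. The following is a minimal pure-Python sketch of those matching semantics, for illustration only; it is not eegdash's actual implementation:

```python
# Simplified sketch of MongoDB-style "$in" matching, as used by the
# query argument above. Not eegdash internals.

def matches(record, query):
    """Return True if `record` satisfies every clause in `query`."""
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            # Operator clause, e.g. {"$in": [...]}
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:
            return False
    return True

records = [
    {"subject": "01", "task": "PredictionError"},
    {"subject": "02", "task": "PredictionError"},
    {"subject": "03", "task": "PredictionError"},
]

query = {"subject": {"$in": ["01", "02"]}}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # ['01', '02']
```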
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds003846,
  title  = {Prediction Error},
  author = {Lukas Gehrke and Sezen Akman and Albert Chen and Pedro Lopes and Klaus Gramann},
  doi    = {10.18112/openneuro.ds003846.v2.0.2},
  url    = {https://doi.org/10.18112/openneuro.ds003846.v2.0.2},
}
About This Dataset#
Readme
In case of any questions, please contact: Lukas Gehrke, lukas.gehrke@tu-berlin.de, orcid: 0000-0003-3661-1973
Overview
Cyber-Physical Systems: Prediction Error
These data were collected at https://www.tu.berlin/bpn. Data collection occurred either between 10:00 and 12:00 or between 14:00 and 18:00.
To learn about the task, independent-, dependent-, and control variables, please consult the methods sections of the following two publications:
https://dl.acm.org/doi/abs/10.1145/3290605.3300657
https://iopscience.iop.org/article/10.1088/1741-2552/ac69bc/meta
Contents of the dataset: Output from BIDS-validator
Summary: 324 files, 9.76 GB; 19 subjects; 5 sessions
Available Tasks PredictionError
Available Modalities EEG
Quality assessment of the data: Link to data paper, once done
Methods
Subjects
The study sample consists of 19 participants (participant_id 1 to 19) with ages ranging from 18 to 34 years and varying cap sizes from 54 to 60. Stimulation is delivered in three blocks: Block_1, Block_2, and Block_3, utilizing different combinations of Visual, Vibro, and EMS.
Participant information:
- Age: 18 to 34 years.
- Cap size: 54 to 60.
- Stimulation blocks: Block_1 and Block_2 include Visual, Visual + Vibro, and Visual + Vibro + EMS; Block_3 primarily involves Visual + Vibro + EMS.

Usage of stimulation blocks:
- Most participants experienced Visual stimulation in all blocks.
- Visual + Vibro is common in Block_1 and Block_2.
- Visual + Vibro + EMS is prevalent in Block_3.
- Some participants did not experience certain blocks (indicated by "0").

Other observations:
- Cap size shows no clear pattern in relation to the stimulation blocks.
- Participants exhibit diverse, individualized stimulation patterns.
Task, Environment and Variables
This set of variables outlines key parameters in a neuroscience experiment involving a haptic task. Here’s a summary:
- box: the target object to be touched after it spawns (string).
- normal_or_conflict: whether the target object behaves normally or as an oddball in the current trial (string).
- condition: the level of haptic realism (string).
- cube: the position of the target object, i.e. left, right, or center (string).
- trial_nr: the number of the current trial (integer).
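These variables correspond to columns in each recording's events.tsv file in the BIDS tree. A short sketch of reading such a file with Python's csv module; the rows below are fabricated for illustration, so consult the actual events.tsv files for real values and the exact column order:

```python
# Sketch: reading a tab-separated events table with the trial variables
# described above and selecting the oddball ("conflict") trials.
import csv
import io

events_tsv = (
    "trial_nr\tbox\tnormal_or_conflict\tcondition\tcube\n"
    "1\tTouched\tnormal\tVisual\tLeft\n"
    "2\tTouched\tconflict\tVisual\tCenter\n"
    "3\tTouched\tnormal\tVisual + Vibro\tRight\n"
)

events = list(csv.DictReader(io.StringIO(events_tsv), delimiter="\t"))
oddballs = [int(e["trial_nr"]) for e in events if e["normal_or_conflict"] == "conflict"]
print(oddballs)  # [2]
```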
Apparatus
Here’s a summary of the recording environment:
EEG Stream Name: BrainVision
EEG Reference and Ground: FCz and AFz, respectively
EEG Channel Locations: 63 channels with specific names (e.g., Fp1, Fz, Pz) and types (EEG)
Additional Channels: 1 EOG (Electrooculogram)
Power Line Frequency: 50 Hz
Manufacturer: Brain Products
Manufacturer’s Model Name: BrainAmp DC
Cap Manufacturer: EasyCap
Cap Model Name: actiCap 64ch CACS-64
EEG Placement Scheme: Positions chosen from a 10% system
Channel counts:
- EEG channels: 63
- EOG channels: 1
- ECG channels: 0
- EMG channels: 0
- Miscellaneous channels: 0
- Trigger channels: 0
This is a high-density EEG setup using Brain Products' BrainAmp DC amplifier and an EasyCap actiCap 64ch CACS-64 electrode cap. The README does not state the sampling frequency (the technical details below report 500 Hz). In addition to the 63 EEG channels, one EOG channel records eye movements.
The motion capture recording environment uses two devices: “rigid_head” and “rigid_handr,” which correspond to “HTCViveHead” and “HTCViveRightHand” in the BIDS (Brain Imaging Data Structure) naming convention. The tracked points include “Head” and “handR.” The motion data is captured using quaternions with channels named “quat_X,” “quat_Y,” “quat_Z,” and “quat_W.” Positional data includes channels “_X,” “_Y,” and “_Z.” The system is manufactured by HTC, with the model name “Vive,” and the recording has a sampling frequency of 90 Hz. Additional information such as software versions is not provided.
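The quaternion channels can be applied directly to positional samples. Below is a pure-Python sketch of rotating a 3-D vector by a unit quaternion in the (quat_X, quat_Y, quat_Z, quat_W) component order described above; analysis code would more typically use a library such as SciPy's Rotation class:

```python
# Sketch: rotating a 3-D position by a unit quaternion given in
# (quat_X, quat_Y, quat_Z, quat_W) channel order.
import math

def rotate(qx, qy, qz, qw, v):
    """Rotate vector v = (x, y, z) by the unit quaternion (qx, qy, qz, qw)."""
    x, y, z = v
    # t = 2 * cross(q_vec, v)
    tx = 2.0 * (qy * z - qz * y)
    ty = 2.0 * (qz * x - qx * z)
    tz = 2.0 * (qx * y - qy * x)
    # v' = v + qw * t + cross(q_vec, t)
    return (
        x + qw * tx + (qy * tz - qz * ty),
        y + qw * ty + (qz * tx - qx * tz),
        z + qw * tz + (qx * ty - qy * tx),
    )

# A 90-degree rotation about the vertical (Z) axis maps (1, 0, 0) to (0, 1, 0):
s = math.sqrt(0.5)
print(rotate(0.0, 0.0, s, s, (1.0, 0.0, 0.0)))
```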
Dataset Information#
Dataset ID | ds003846
Title | Prediction Error
Year | 2021
Authors | Lukas Gehrke, Sezen Akman, Albert Chen, Pedro Lopes, Klaus Gramann
License | CC0
Citation / DOI | doi:10.18112/openneuro.ds003846.v2.0.2
Source links | OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 19
Recordings: 325
Tasks: 1
Channels: 63 (50), 64 (50)
Sampling rate (Hz): 500.0
Duration (hours): 0.0
Pathology: Not specified
Modality: —
Type: —
Size on disk: 9.8 GB
File count: 325
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds003846.v2.0.2
API Reference#
Use the DS003846 class to access this dataset programmatically.
- class eegdash.dataset.DS003846(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDataset
OpenNeuro dataset ds003846. Modality: eeg; Experiment type: Decision-making; Subject type: Healthy. Subjects: 19; recordings: 50; tasks: 1.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type: Path
- query#
Merged query with the dataset filter applied.
- Type: dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds003846
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds003846
Examples
>>> from eegdash.dataset import DS003846
>>> dataset = DS003846(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset