DS005594#
Alphabetic Decision Task (Arial Light Font)
Access recordings and metadata through EEGDash.
Citation: Jack E. Taylor, Rasmus Sinn, Cosimo Iaia, Christian J. Fiebach (2024). Alphabetic Decision Task (Arial Light Font). 10.18112/openneuro.ds005594.v1.0.3
Modality: eeg Subjects: 16 Recordings: 133 License: CC0 Source: openneuro Citations: 1.0
Metadata: Complete (100%)
Quickstart#
Install

```bash
pip install eegdash
```
Access the data

```python
from eegdash.dataset import DS005594

dataset = DS005594(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```
Filter by subject

```python
dataset = DS005594(cache_dir="./data", subject="01")
```
Advanced query

```python
dataset = DS005594(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```
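The `$in` operator in the advanced query selects records whose field value is any member of the given list. As a rough pure-Python illustration of how such MongoDB-style matching behaves (not EEGDash's actual implementation; `matches` is a hypothetical helper):

```python
def matches(record: dict, query: dict) -> bool:
    """Sketch of MongoDB-style matching: supports exact values
    and the $in operator only."""
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict) and "$in" in condition:
            if value not in condition["$in"]:
                return False
        elif value != condition:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # ['01', '02']
```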
Iterate recordings

```python
for rec in dataset:
    print(rec.subject, rec.raw.info["sfreq"])
```
If you use this dataset in your research, please cite the original authors.
BibTeX

```bibtex
@dataset{ds005594,
  title = {Alphabetic Decision Task (Arial Light Font)},
  author = {Jack E. Taylor and Rasmus Sinn and Cosimo Iaia and Christian J. Fiebach},
  doi = {10.18112/openneuro.ds005594.v1.0.3},
  url = {https://doi.org/10.18112/openneuro.ds005594.v1.0.3},
}
```
About This Dataset#
Generated from raw data by MNE-BIDS (Appelhoff et al., 2019) and custom code that joins the recordings to behavioural data, stimulus information, and metadata.
Notes on the Data
For full details on this dataset, see our preprint: Taylor et al. (2024) https://doi.org/10.1101/2024.11.11.622929
General notes:
* An issue during recording meant that sub-05 completed the first block without data being saved. The experiment was restarted from the beginning for this participant. This participant was not included in our analyses, but the data are included in this dataset. They are also identified with the recording_restarted field in participants.tsv.
* A separate issue during recording meant that EEG data for some trials were lost for sub-01, though enough trials were recorded in total to meet our criteria for inclusion in the analysis. The raw data comprised two separate recordings. In this dataset, the two recordings are concatenated end-to-end into one file. The point at which the files are joined is marked with a boundary event. This participant is identified with the recording_interrupted field in participants.tsv.
* During the course of the experiment, we identified an issue with the wiring in one splitter box, which meant that voltages from channels FT7 and FC3 were swapped in the raw recorded data. We elected to keep the wiring as it was for the duration of the experiment, and then swapped the data from the two channels in the code that generated this BIDS dataset. This means that this issue has been corrected in this BIDS version of the data.
* “BAD” periods (MNE term) for key presses and break periods are included in the events files.
* Recording dates/times have been anonymised by shifting all recordings backwards in time by a constant number of days (same constant for all participants). This obscures information that may be used to identify participants, but preserves time-of-day information, and the relative times elapsed between different recordings.
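The date-shift anonymisation described above can be sketched in a few lines; the shift constant below is hypothetical (the real value is, by design, not disclosed):

```python
from datetime import datetime, timedelta

SHIFT_DAYS = 1000  # hypothetical constant; the real shift is not disclosed

def anonymise(meas_date: datetime, shift_days: int = SHIFT_DAYS) -> datetime:
    """Shift a recording date backwards by a whole number of days.

    Preserves time-of-day and the intervals between recordings,
    while obscuring the true calendar dates."""
    return meas_date - timedelta(days=shift_days)

a = anonymise(datetime(2023, 5, 1, 14, 30))
b = anonymise(datetime(2023, 5, 8, 14, 30))
print(a.time(), (b - a).days)  # → 14:30:00 7
```

Because every participant is shifted by the same whole-day constant, both time-of-day and the relative spacing between sessions survive the anonymisation.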
References
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Dataset Information#
| Field | Value |
|---|---|
| Dataset ID | ds005594 |
| Title | Alphabetic Decision Task (Arial Light Font) |
| Year | 2024 |
| Authors | Jack E. Taylor, Rasmus Sinn, Cosimo Iaia, Christian J. Fiebach |
| License | CC0 |
| Citation / DOI | 10.18112/openneuro.ds005594.v1.0.3 |
| Source links | OpenNeuro, NeMAR, Source URL |
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 16
Recordings: 133
Tasks: 1
Channels: 64 (16), 66 (16)
Sampling rate (Hz): 1000.0
Duration (hours): 0.0
Pathology: Not specified
Modality: —
Type: —
Size on disk: 10.9 GB
File count: 133
Format: BIDS
License: CC0
DOI: 10.18112/openneuro.ds005594.v1.0.3
API Reference#
Use the DS005594 class to access this dataset programmatically.
class eegdash.dataset.DS005594(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds005594. Modality: eeg; Experiment type: Unknown; Subject type: Unknown. Subjects: 16; recordings: 16; tasks: 1.

Parameters:
* cache_dir (str | Path) – Directory where data are cached locally.
* query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
* s3_bucket (str | None) – Base S3 bucket used to locate the data.
* **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
Attributes:
* data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
* query (dict) – Merged query with the dataset filter applied.
* records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes

Each item is a recording; recording-level metadata are available via dataset.description. The query argument supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/ds005594
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005594
Examples
```python
>>> from eegdash.dataset import DS005594
>>> dataset = DS005594(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```
See Also#
eegdash.dataset.EEGDashDataset, eegdash.dataset