DS005868#
Flankers-FAR
Access recordings and metadata through EEGDash.
Citation: Brennan Terhune-Cotter, Phillip J. Holcomb, Katherine J. Midgley, Sofia E. Ortega, Emily M. Akers, Karen Emmorey (2025). Flankers-FAR. 10.18112/openneuro.ds005868.v1.0.1
Modality: EEG | Subjects: 48 | Recordings: 246 | License: CC0 | Source: OpenNeuro
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS005868
dataset = DS005868(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS005868(cache_dir="./data", subject="01")
Advanced query
dataset = DS005868(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds005868,
title = {Flankers-FAR},
author = {Brennan Terhune-Cotter and Phillip J. Holcomb and Katherine J. Midgley and Sofia E. Ortega and Emily M. Akers and Karen Emmorey},
doi = {10.18112/openneuro.ds005868.v1.0.1},
url = {https://doi.org/10.18112/openneuro.ds005868.v1.0.1},
}
About This Dataset#
Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California under the supervision of Dr. Phillip Holcomb and Dr. Karen Emmorey. This project followed the San Diego State University’s IRB guidelines.
Participants sat in a comfortable chair in a darkened, sound-attenuated room throughout the experiment. They were given a game controller for responding to stimuli and were instructed to watch a 24-inch LCD video monitor placed at a viewing distance of 60 in (152 cm).
Participants were presented with 90 four-letter real words and 90 four-letter pseudowords in white New Courier font on a black background. Each letter subtended 0.41 degrees of visual angle. The flanker words were separated from the center target word by 3.28 degrees of empty space on both sides. All targets and flankers were content words below a 6th-grade reading level; plural words and proper nouns were excluded. All words were presented once in each of the three conditions: no flanker, identical flankers, or different flankers. There were 270 trials. Trials started with a purple fixation cross for 1000 ms, followed by a white fixation cross for 500 ms to prepare participants for the presentation of the stimulus. The stimulus item was then presented for 150 ms, followed by a blank screen shown until participants responded via the game controller.
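At the dataset's 500 Hz sampling rate (see Technical Details), the trial intervals described above map onto fixed sample counts. A minimal illustrative sketch of that conversion (the helper and variable names below are ours, not part of eegdash):

```python
# Illustrative only: convert the trial timing described above into sample
# counts at this dataset's 500 Hz sampling rate.
SFREQ_HZ = 500.0

def ms_to_samples(duration_ms: float, sfreq: float = SFREQ_HZ) -> int:
    """Number of samples spanned by an interval of duration_ms milliseconds."""
    return int(round(duration_ms / 1000.0 * sfreq))

# Trial structure from the paradigm description above.
trial_timing_ms = {
    "purple_fixation": 1000,  # prepares the trial
    "white_fixation": 500,    # cues the upcoming stimulus
    "stimulus": 150,          # target word with/without flankers
}
trial_timing_samples = {k: ms_to_samples(v) for k, v in trial_timing_ms.items()}
print(trial_timing_samples)
# {'purple_fixation': 500, 'white_fixation': 250, 'stimulus': 75}
```

Such sample counts are what you would pass as epoch boundaries when segmenting the continuous recordings around stimulus events.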
Dataset Information#
Dataset ID: ds005868
Title: Flankers-FAR
Year: 2025
Authors: Brennan Terhune-Cotter, Phillip J. Holcomb, Katherine J. Midgley, Sofia E. Ortega, Emily M. Akers, Karen Emmorey
License: CC0
Citation / DOI: 10.18112/openneuro.ds005868.v1.0.1
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 48
Recordings: 246
Tasks: 1
Channels: 32
Sampling rate (Hz): 500.0
Duration (hours): 0.0
Pathology: Healthy
Modality: Visual
Type: Attention
Size on disk: 2.9 GB
File count: 246
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds005868.v1.0.1
API Reference#
Use the DS005868 class to access this dataset programmatically.
class eegdash.dataset.DS005868(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
Bases: EEGDashDataset

OpenNeuro dataset ds005868. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 48; recordings: 246; tasks: 1.

Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
Attributes:
- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
- OpenNeuro dataset: https://openneuro.org/datasets/ds005868
- NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005868
Examples
>>> from eegdash.dataset import DS005868
>>> dataset = DS005868(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
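The query-merging behaviour described in the Notes (user filters ANDed with the fixed dataset filter, with the dataset key reserved) can be sketched in plain Python. This is a hypothetical helper for illustration, not the library's actual implementation:

```python
from __future__ import annotations

# Hedged sketch: how a MongoDB-style user query might be combined with the
# dataset filter. merge_with_dataset_filter is our name, not an eegdash API.
def merge_with_dataset_filter(user_query: dict | None, dataset_id: str) -> dict:
    """AND a user-supplied query with the fixed dataset filter."""
    merged = {"dataset": dataset_id}
    if user_query:
        if "dataset" in user_query:
            # The documented contract: the user query must not set this key.
            raise ValueError("query must not contain the key 'dataset'")
        merged.update(user_query)
    return merged

print(merge_with_dataset_filter({"subject": {"$in": ["01", "02"]}}, "ds005868"))
# {'dataset': 'ds005868', 'subject': {'$in': ['01', '02']}}
```

Because all keys in a flat MongoDB filter are implicitly ANDed, a plain dict merge expresses the "combined with the dataset filter" behaviour.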
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset