DS004105#

BCIT Auditory Cueing

Access recordings and metadata through EEGDash.

Citation: Javier Garcia (data), Justin Brooks (data), Scott Kerick (data), Tony Johnson (data and curation), Tim Mullen (data), Jean Vettel (data), Jonathan Touryan (curation), Kay Robbins (curation) (2022). BCIT Auditory Cueing. doi:10.18112/openneuro.ds004105.v1.0.0

Modality: eeg Subjects: 17 Recordings: 245 License: CC0 Source: openneuro Citations: 0

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS004105

dataset = DS004105(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS004105(cache_dir="./data", subject="01")

Advanced query

dataset = DS004105(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
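
Inspect a recording (sketch)

A minimal follow-on sketch, assuming each rec.raw is a standard mne.io.Raw object (as the quickstart suggests); the filter band is illustrative, not a recommendation for this dataset.

# Load the first recording into memory and band-pass filter it.
# Assumes dataset.datasets[0].raw is an mne.io.Raw object.
raw = dataset.datasets[0].raw
raw.load_data()                       # pull data into memory before filtering
raw.filter(l_freq=1.0, h_freq=40.0)   # illustrative 1-40 Hz band-pass
print(raw.n_times / raw.info["sfreq"], "seconds of data")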

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds004105,
  title = {BCIT Auditory Cueing},
  author = {Javier Garcia (data) and Justin Brooks (data) and Scott Kerick (data) and Tony Johnson (data and curation) and Tim Mullen (data) and Jean Vettel (data) and Jonathan Touryan (curation) and Kay Robbins (curation)},
  doi = {10.18112/openneuro.ds004105.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds004105.v1.0.0},
}

About This Dataset#

Introduction

Overview: Subjects in the Auditory Cueing study performed a long-duration simulated driving task with perturbations and audio stimuli in a visually sparse environment.

The purpose of this effort was to supplement and extend related driving research by collecting prolonged time-on-task measurements of subjects performing a driving task in a simulated environment, in order to assess fatigue-related performance changes through novel biomarkers.

Similar to the Baseline Driving study, the Auditory Cueing study was intended to identify periods of driver fatigue via predictive algorithms formulated from analysis of driver EEG data, compared against objective performance measures and contrasted with the subject's (non-fatigued) Calibration Driving session. Auditory Cueing extended the Baseline Driving paradigm by adding predictive and non-predictive (random) audio cues before perturbation onsets, and by increasing the frequency and magnitude of perturbation events relative to baseline driving. Further information is available on request from cancta.net.

Methods

Subjects: Volunteers from the local community recruited through advertisements.

Apparatus: Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI); video refresh rate (VRR) = 900 Hz; vehicle data log file sampling rate (SR) = 100 Hz. EEG: BioSemi 64 (+8) channel system, with 4 eye and 2 mastoid channels recorded; SR = 2048 Hz. Eye tracking: SensoMotoric Instruments (SMI) REDEYE250.

Initial setup: Upon arrival at the lab, subjects were introduced to the primary study for which they were recruited, gave informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator. The practice drive lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and no motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition.

Task organization within the study: Each recording session began with a Calibration Driving task: a 15-minute drive in which the subject controlled only the steering while the simulator controlled speed. Subjects then performed Auditory Cueing condition A and Auditory Cueing condition B, with the order counterbalanced across subjects. This dataset contains only the Auditory Cueing portion of the study.

Auditory cueing task details: Auditory Cueing A was 45 minutes of continuous driving, with subjects responsible for steering and maintaining speed, while a tone was played periodically at random intervals. Auditory Cueing B was similar, except the tones were correlated with the onset of perturbation events. Both driving tasks were conducted on the same simulated long, straight road. In each case, the subject was instructed to stay within the boundaries of the right-most lane and to drive at the posted speed limits.

The vehicle was periodically subjected to lateral perturbing forces, applied to either side of the vehicle, that pushed it out of the center of the lane; the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane.
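
If the perturbation and cue onsets are encoded as annotations on the raw recordings (as is typical for BIDS EEG data), a sketch for listing them might look like the following; the event labels themselves are not confirmed here and should be checked against the dataset's *_events.tsv files.

import mne
from eegdash.dataset import DS004105

# Hypothetical sketch: list the event annotations attached to one recording.
# The actual labels for perturbation and cue onsets must be verified against
# this dataset's *_events.tsv files; none are confirmed here.
dataset = DS004105(cache_dir="./data")
raw = dataset.datasets[0].raw
events, event_id = mne.events_from_annotations(raw)
print(event_id)    # mapping from annotation description to integer event code
print(events[:5])  # first events as [sample, previous_value, code] rows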

Independent variables: Auditory cue (presented at random before perturbations vs. predictive of perturbation onset).

Dependent variables: Reaction times to perturbations, continuous performance measures from the vehicle log (steering wheel angle, lane position, heading error, etc.), reaction times to target vehicles (police), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F).

Note: Questionnaire data are available upon request from cancta.net.

Additional data acquired: Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire.

Experimental Location: Teledyne Corporation, Durham, NC.

Note: A companion dataset, BCIT Calibration Driving (ds004118), contains the 15-minute calibration drive performed prior to this task.

Dataset Information#

Dataset ID

DS004105

Title

BCIT Auditory Cueing

Year

2022

Authors

Javier Garcia (data), Justin Brooks (data), Scott Kerick (data), Tony Johnson (data and curation), Tim Mullen (data), Jean Vettel (data), Jonathan Touryan (curation), Kay Robbins (curation)

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds004105.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 17

  • Recordings: 245

  • Tasks: 1

Channels & sampling rate
  • Channels: 74

  • Sampling rate (Hz): 1024.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Multisensory

  • Type: Attention

Files & format
  • Size on disk: 20.4 GB

  • File count: 245

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds004105.v1.0.0

API Reference#

Use the DS004105 class to access this dataset programmatically.

class eegdash.dataset.DS004105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds004105. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 17; recordings: 245; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type: Path

query#

Merged query with the dataset filter applied.

Type: dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type: list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
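
For example, a sketch assuming dataset.description behaves like a pandas DataFrame (as in braindecode) and that "subject" is among the allowed query fields:

from eegdash.dataset import DS004105

# Merge an extra MongoDB-style filter with the implicit dataset filter.
# "subject" is used in the quickstart above; other field names should be
# checked against ALLOWED_QUERY_FIELDS before use.
dataset = DS004105(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
print(dataset.description)  # recording-level metadata, one row per recording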

References

OpenNeuro dataset: https://openneuro.org/datasets/ds004105
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds004105

Examples

>>> from eegdash.dataset import DS004105
>>> dataset = DS004105(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()

__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type: None
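
A minimal usage sketch, reusing the dataset from the example above (the destination path is illustrative):

# Persist the dataset object to disk for later reuse (illustrative path).
dataset.save("./ds004105_saved", overwrite=True)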

See Also#