DS004123#

BCIT Traffic Complexity

Access recordings and metadata through EEGDash.

Citation: Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). BCIT Traffic Complexity. 10.18112/openneuro.ds004123.v1.0.0

Modality: eeg Subjects: 29 Recordings: 273 License: CC0 Source: openneuro Citations: 0

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS004123

dataset = DS004123(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS004123(cache_dir="./data", subject="01")

Advanced query

dataset = DS004123(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
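
The `query` argument accepts MongoDB-style operators such as `$in`. As an illustration of the matching semantics only (a plain-Python sketch, not EEGDash's actual implementation), a `{"subject": {"$in": [...]}}` filter keeps a record when its subject value appears in the list:

```python
def matches_in_filter(record, query):
    """Illustrative matcher for MongoDB-style queries with "$in".
    Not EEGDash internals -- just a sketch of the filter semantics."""
    for field, condition in query.items():
        if isinstance(condition, dict) and "$in" in condition:
            if record.get(field) not in condition["$in"]:
                return False
        elif record.get(field) != condition:
            return False
    return True

query = {"subject": {"$in": ["01", "02"]}}
print(matches_in_filter({"subject": "01"}, query))  # True
print(matches_in_filter({"subject": "07"}, query))  # False
```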

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
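
With 273 recordings across 29 subjects, it can help to tally how many recordings each subject contributes before loading any data. A minimal sketch using a stand-in list of labels (in practice these would be collected from `rec.subject` while iterating):

```python
from collections import Counter

# Stand-in subject labels; in practice, collect rec.subject in the loop.
subject_labels = ["01", "01", "02", "02", "02", "03"]

recordings_per_subject = Counter(subject_labels)
print(recordings_per_subject)  # Counter({'02': 3, '01': 2, '03': 1})
```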

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds004123,
  title = {BCIT Traffic Complexity},
  author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)},
  doi = {10.18112/openneuro.ds004123.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds004123.v1.0.0},
}

About This Dataset#


BCIT Traffic Complexity

Introduction

Overview: The Traffic Complexity study was designed to collect extended time-on-task measurements of subjects performing a driving task in a simulated environment, in order to assess fatigue-based performance through novel biomarkers. Like the Baseline Driving study, the Speed Control study was intended to identify periods of driver fatigue via predictive algorithms applied to driver EEG data, validated against objective performance measures and contrasted with the subject's (non-fatigued) Calibration driving session. Traffic Complexity extended this paradigm by modulating the visual complexity of the environment and the frequency of perturbation events relative to Baseline Driving.

Further information is available on request from cancta.net.

Methods

Subjects: Volunteers from the local community recruited through advertisements.

Apparatus: Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI); video refresh rate (VRR) = 900 Hz; vehicle data log file sampling rate (SR) = 100 Hz. EEG: BioSemi 64 (+8)-channel system, with 4 eye and 2 mastoid channels recorded; SR = 2048 Hz. Eye tracking: SensoMotoric Instruments (SMI) REDEYE250.

Initial setup: Upon arrival at the lab, subjects were introduced to the primary study for which they were recruited, gave informed consent, and provided demographic information. This was followed by a practice session to acclimate subjects to the driving simulator. The practice task lasted 10-15 min, until asymptotic steering and speed-control performance was demonstrated and no motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition.

Task organization: Subjects performed both the Baseline Driving task and the Traffic Complexity task, with task order counterbalanced across subjects. The Baseline Driving run was 45 minutes of continuous driving, with subjects responsible for speed and steering control. Both driving tasks were conducted on the same simulated long, straight road; the Baseline run used a visually sparse environment, while the Traffic Complexity runs included pedestrians and other traffic. In each case, subjects were instructed to stay within the boundaries of the right-most lane and to drive at the posted speed limits.

The vehicle was periodically subjected to lateral perturbing forces, applied to either side of the vehicle, that pushed it out of the center of the lane; subjects were instructed to execute corrective steering actions to return the vehicle to the lane center.
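
Given event onset times for the perturbations and for steering actions, the reaction time to each perturbation can be estimated as the latency of the first steering event that follows it. A minimal sketch with synthetic event times (the function and the event lists are illustrative, not part of this dataset's released code):

```python
from bisect import bisect_right

def reaction_times(perturbation_onsets, steering_onsets):
    """Latency from each perturbation onset to the first subsequent
    steering action, in the same units as the inputs; None if none follows."""
    steering_onsets = sorted(steering_onsets)
    rts = []
    for onset in perturbation_onsets:
        i = bisect_right(steering_onsets, onset)
        rts.append(steering_onsets[i] - onset if i < len(steering_onsets) else None)
    return rts

# Synthetic onset times in seconds
perturbs = [10.0, 42.5, 80.0]
steers = [10.8, 43.1, 43.9]
print([round(rt, 3) if rt is not None else None
       for rt in reaction_times(perturbs, steers)])
# [0.8, 0.6, None]
```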

Independent variables: Visual Complexity (high vs. low), Perturbation Frequency (high vs. low).

Dependent variables: Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F).

Note: Questionnaire data are available upon request from cancta.net.

Additional data acquired: Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire.

Experimental Locations: Teledyne Corporation, Durham, NC.

Note 1: This dataset has a corresponding dataset, BCIT Calibration Driving (ds004118), containing the 15-minute calibration drive performed prior to this task.

Note 2: This dataset has a corresponding dataset, BCIT Baseline Driving (ds004120), a longer driving task in a visually sparse environment.

Dataset Information#

Dataset ID

DS004123

Title

BCIT Traffic Complexity

Year

2022

Authors

Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation)

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds004123.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL


Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 29

  • Recordings: 273

  • Tasks: 1

Channels & sampling rate
  • Channels: 74

  • Sampling rate (Hz): 1024.0

  • Duration (hours): 0.0

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Attention

Files & format
  • Size on disk: 17.5 GB

  • File count: 273

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds004123.v1.0.0

Provenance

API Reference#

Use the DS004123 class to access this dataset programmatically.

class eegdash.dataset.DS004123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds004123. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 29; recordings: 273; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type: Path

query#

Merged query with the dataset filter applied.

Type: dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type: list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
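
The merging behavior described above can be pictured as a simple dictionary merge in which the fixed dataset filter always wins and a user-supplied dataset key is rejected. This is an illustrative sketch of the documented contract, not the library's actual code:

```python
def merge_query(dataset_id, user_query=None):
    """Sketch of the documented contract: AND a user query with the fixed
    dataset filter; the user query must not contain the key 'dataset'."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

print(merge_query("ds004123", {"subject": {"$in": ["01", "02"]}}))
# {'dataset': 'ds004123', 'subject': {'$in': ['01', '02']}}
```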

References

OpenNeuro dataset: https://openneuro.org/datasets/ds004123

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds004123

Examples

>>> from eegdash.dataset import DS004123
>>> dataset = DS004123(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#