DS004657#
Driving with Autonomous Aids
Access recordings and metadata through EEGDash.
Citation: Jason Metcalfe, Amar Marathe, Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). Driving with Autonomous Aids. 10.18112/openneuro.ds004657.v1.0.3
Modality: eeg · Subjects: 24 · Recordings: 838 · License: CC0 · Source: openneuro · Citations: 1
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS004657
dataset = DS004657(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS004657(cache_dir="./data", subject="01")
Advanced query
dataset = DS004657(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
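After iterating over recordings as above, a common next step is to bucket them per subject before epoching or cross-validation. A minimal sketch of that pattern, using stand-in objects in place of real EEGDash recordings (only the `subject` attribute shown in the loop above is assumed):

```python
from collections import defaultdict

def group_by_subject(recordings):
    """Group recording objects by their subject label."""
    groups = defaultdict(list)
    for rec in recordings:
        groups[rec.subject].append(rec)
    return dict(groups)

# Minimal stand-in objects for illustration (the real items come from DS004657).
class _Rec:
    def __init__(self, subject):
        self.subject = subject

recs = [_Rec("01"), _Rec("01"), _Rec("02")]
groups = group_by_subject(recs)
print({s: len(r) for s, r in groups.items()})  # {'01': 2, '02': 1}
```

The same dictionary can then drive per-subject preprocessing or leave-one-subject-out splits.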
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds004657,
title = {Driving with Autonomous Aids},
author = {Jason Metcalfe and Amar Marathe and Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King},
doi = {10.18112/openneuro.ds004657.v1.0.3},
url = {https://doi.org/10.18112/openneuro.ds004657.v1.0.3},
}
About This Dataset#
TX20 dataset
Vehicle survivability is critically important in today’s military. Survivability is critically impacted by the performance of human operators – especially as it degrades with various factors. Significant DoD investments have focused on developing and integrating autonomous technologies to mitigate the effects of human error. However, simply implementing autonomy without having a clear plan for integrating with human operators can lead to relatively poor performance and thus low user acceptance. Human trust in automation (TiA) is a well-documented determinant of acceptance and use, but more important than achieving a certain level of trust is to find an appropriate match between the capabilities of the technology and the operator’s trust. Finding means to calibrate TiA to elicit the desired use of the autonomy is an important goal, but requires reliable quantitative indicators that can be continuously monitored. Considerable research on interpersonal trust has revealed measurable patterns of physiological change that correlate significantly with changing levels of subjective trust and trust-based decision making. This research was aimed at facilitating the eventual real-time management of TiA by developing initial psychophysiology-based metrics for monitoring and predicting continuous changes in trust and/or trust-related behaviors.
Participants completed a semi-automated driving task involving lane maintenance, keeping a following distance from a lead vehicle, and collision avoidance (with oncoming traffic and frequently appearing pedestrians). Under certain conditions, an automated driving assistant was available and could be engaged and disengaged at the driver's discretion. The automated assistant could manage limited aspects of the driving task (maintenance of following distance alone, or following distance and lane position together), but it was not capable of collision avoidance. Separate driver responses (button presses) were required to successfully avoid collisions with pedestrians.
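Because collision avoidance depends on discrete button presses, a first analysis step is often tallying event types per recording. The sketch below counts occurrences of each annotation description with `collections.Counter`; the label strings here are hypothetical placeholders, not the dataset's actual event codes (inspect `raw.annotations.description` on a loaded recording to see those):

```python
from collections import Counter

def count_events(descriptions):
    """Tally annotation descriptions, e.g. from raw.annotations.description."""
    return Counter(descriptions)

# Hypothetical labels for illustration only -- not the real event codes.
descriptions = ["pedestrian", "button_press", "pedestrian",
                "button_press", "lane_change"]
counts = count_events(descriptions)
print(counts["button_press"])  # 2
```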
This research was conducted to develop and validate methods for monitoring and predicting varying degrees of trust in automation (TiA) using both physiological and behavioral metrics that characterize real-time human-automation interactions. The overarching goal was to develop and validate methods for measuring and drawing inferences about TiA, either directly or indirectly through correlated constructs. In particular, we examined operator trust in vehicle automation as reflected in subjective reports as well as behavioral and physiological state variables during the execution of a shared human-autonomy driving task. The aims underlying this goal were:

Aim #1: To develop and experimentally validate metrics (dependent variables) that index changes in TiA. Rather than focusing on single-modality metrics, we will record and explore the patterns of correlation and covariance among a variety of psychophysiological and behavioral variables, focusing particularly on metrics that predict decisions about sharing vehicle control with the autonomy in each condition. State measures will be derived from EEG, EOG (electrooculography), ECG, EDA, and gaze position tracking, as well as the subject's vehicle control behaviors.

Aim #2: To develop an understanding of factors (independent variables and covariates) that influence the subject's TiA. Whereas Aim #1 targets the identification of metrics, or groups of metrics, that reliably predict trust-based decision-making, here we seek insight into which factors influence the likelihood and directionality of those same trust-based decisions. Such factors will include real-time tracking of variables such as task load, collision risk, and recent performance history or trending changes in success rate.
Sessions/Conditions:
- SCPB: PractB
- SCMM: Manual driving
- SCFB: Full Bad autonomy
- SCFG: Full Good autonomy
- SCSB: Speed Bad autonomy
- SCSG: Speed Good autonomy
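To compare conditions, recordings can be narrowed at load time with a MongoDB-style query. A sketch selecting only the two "Good autonomy" sessions; note that the `session` field name is an assumption (check `ALLOWED_QUERY_FIELDS` for the fields EEGDash actually accepts):

```python
# Hypothetical query narrowing to the "Good autonomy" sessions.
# The "session" key is an assumption -- verify against ALLOWED_QUERY_FIELDS.
good_autonomy_query = {"session": {"$in": ["SCFG", "SCSG"]}}

# Would be passed as: DS004657(cache_dir="./data", query=good_autonomy_query)
print(good_autonomy_query["session"]["$in"])  # ['SCFG', 'SCSG']
```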
Dataset Information#
Dataset ID: ds004657
Title: Driving with Autonomous Aids
Year: 2023
Authors: Jason Metcalfe, Amar Marathe, Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King
License: CC0
Citation / DOI: 10.18112/openneuro.ds004657.v1.0.3
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 24
Recordings: 838
Tasks: 1
Channels: 64 (119), 74 (119)
Sampling rate (Hz): 1024.0 (222), 8192.0 (16)
Duration (hours): 0.0
Pathology: Not specified
Modality: —
Type: —
Size on disk: 43.1 GB
File count: 838
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds004657.v1.0.3
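The recordings mix two native sampling rates (1024 Hz for most, 8192 Hz for 16), so pooled analyses usually resample to a common rate first (e.g. `raw.resample(1024)` with MNE). A small helper, assuming integer-ratio downsampling, for computing the decimation factor:

```python
def decimation_factor(native_sfreq: float, target_sfreq: float) -> int:
    """Integer factor for downsampling; raises if rates are not commensurate."""
    factor = native_sfreq / target_sfreq
    if factor != int(factor) or factor < 1:
        raise ValueError(
            f"{native_sfreq} Hz is not an integer multiple of {target_sfreq} Hz"
        )
    return int(factor)

# 8192 Hz recordings can be brought down to the majority rate of 1024 Hz:
print(decimation_factor(8192.0, 1024.0))  # 8
```

Remember to low-pass filter before decimating (MNE's `resample` handles the anti-aliasing filter for you).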
API Reference#
Use the DS004657 class to access this dataset programmatically.
class eegdash.dataset.DS004657(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)

Bases: EEGDashDataset

OpenNeuro dataset ds004657. Modality: eeg; Experiment type: Decision-making. Subjects: 24; recordings: 119; tasks: 1.

Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

Attributes:
- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/ds004657
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds004657
Examples
>>> from eegdash.dataset import DS004657
>>> dataset = DS004657(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset