DS002893#

Auditory-Visual Shift Study

Access recordings and metadata through EEGDash.

Citation: Marissa Westerfield (data, curation), Scott Makeig (data, curation), Dung Truong (curation), Kay Robbins (curation), Arno Delorme (curation) (2020). Auditory-Visual Shift Study. 10.18112/openneuro.ds002893.v2.0.0

Modality: eeg Subjects: 49 Recordings: 370 License: CC0 Source: openneuro Citations: 1

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS002893

dataset = DS002893(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS002893(cache_dir="./data", subject="01")

Advanced query

dataset = DS002893(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds002893,
  title = {Auditory-Visual Shift Study},
  author = {Marissa Westerfield (data, curation) and Scott Makeig (data, curation) and Dung Truong (curation) and Kay Robbins (curation) and Arno Delorme (curation)},
  doi = {10.18112/openneuro.ds002893.v2.0.0},
  url = {https://doi.org/10.18112/openneuro.ds002893.v2.0.0},
}

About This Dataset#


Audio-Visual Attention Shift Experiment

Project name: Sensory processing in aging

Years the project ran: 2007-2008

Brief overview of experiment task: The purpose of this Auditory-Visual Attention Shift study was to explore the effects of aging on selective attending and responding to auditory and visual stimulus differences using an interleaved dual-oddball audio-visual task design. EEG and EOG channels were acquired.

Data collection. Scalp EEG data were collected from 33 scalp electrode channels, each referred to a right mastoid electrode, within an analogue passband of 0.1 to 60 Hz.

Contact person: Scott Makeig <smakeig@ucsd.edu>, ORCID: 0000-0002-9048-8438.

Access information: Contributed to OpenNeuro.org and NEMAR.org in BIDS format, following annotation using HED 8.0.0, in April 2022.

Independent variables: Stimulus stream (visual, auditory, cue); stimulus stream identity (target, standard); task condition (FA, FV, SH)

Dependent variables: Participant response (correct/incorrect). Button press response attributes (task time window and post-target latency).

Participant pool: The dataset includes data collected from 19 younger adult subjects (8 male, 11 female, ages 20–40 years) and 30 older adult subjects (11 male, 19 female, ages 49–73 years). The subjects were cognitively intact and had normal or corrected-to-normal hearing and vision.

Initial setup: EEG data were collected from 33 EEG channels using the 10-20 placement and referenced to the right mastoid. The left mastoid and two EOG channels were also included in the collection. The data was acquired at a sampling rate of 250 Hz with an analog pass band of 0.01 to 60 Hz (SA Instrumentation, San Diego). Input impedances were brought under 5 kilo-ohms by careful scalp preparation.
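Because all channels were referenced to the right mastoid and the left mastoid was also recorded, the data can be re-referenced offline to linked mastoids. This is not a step performed in the dataset itself, just a minimal NumPy sketch of the arithmetic on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 33, 250 * 10  # 10 s at the nominal 250 Hz rate

# Synthetic scalp data, referenced to the right mastoid (RM).
data_rm_ref = rng.standard_normal((n_channels, n_samples))
left_mastoid = rng.standard_normal(n_samples)  # LM channel, also RM-referenced

# Linked-mastoid re-reference: x_new = x_old - LM_rec / 2.
# Derivation: x_old = s - RM and LM_rec = LM - RM, so
# s - (LM + RM)/2 = (s - RM) - (LM - RM)/2 = x_old - LM_rec/2.
data_linked = data_rm_ref - left_mastoid / 2

print(data_linked.shape)  # (33, 2500)
```

The same transformation is available in standard EEG toolboxes; the point here is only the referencing identity.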

Task conditions:

  • Focus Visual (FV): participants pressed the response button only in response to target visual stimuli.

  • Focus Auditory (FA): participants pressed the same button only in response to target auditory stimuli.

  • Shift Focus (SF): participants shifted between performing the FV and FA tasks as cued by the preceding (Look/Hear) cue stimulus.

Task organization: The stimuli were presented in blocks of 264 for a duration of 2.64. In each block there were 12 “Hear” and 12 “Look” cues. A total of 20 blocks were presented per session. Each experiment began with two non-shift blocks (one each of auditory-focus FA and visual-focus FV, counter-balanced across sessions). These were followed by 12 SF shift blocks. Finally, an auditory-focus FA group (3 blocks) and a visual-focus FV group (3 blocks) were presented; the order of these groups was counter-balanced across experiments. Brief rest periods occurred between task blocks, and the task condition of the next block was given verbally to the participant during the pre-block rest period.
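The block counts above can be sanity-checked with a few lines of Python. The layout below is one possible ordering; the actual FA/FV ordering was counter-balanced across sessions:

```python
# One possible session layout: 2 non-shift blocks, 12 Shift Focus blocks,
# then a 3-block FA group and a 3-block FV group (ordering counter-balanced).
blocks = ["FA", "FV"] + ["SF"] * 12 + ["FA"] * 3 + ["FV"] * 3
stimuli_per_block = 264
cues_per_block = 12 + 12  # 12 "Hear" + 12 "Look" cues per block

print(len(blocks))                      # 20 blocks per session
print(len(blocks) * stimuli_per_block)  # 5280 stimuli per session
```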

Task details: Participants responded by finger button press selectively to auditory (brief tones) and visual (colored squares) stimuli. These constituted distinct, interleaved auditory and visual oddball stimulus streams, presented in randomly interleaved order with stimulus-onset asynchronies (SOAs) varying randomly between 200 and 800 ms.

  • Visual stimuli: (infrequent, 10%) dark blue target or (frequent, 90%) light blue standard 8.4-cm² squares, presented for 100 ms.

  • Auditory stimuli: (infrequent, 10%) 550-Hz target or (frequent, 90%) 500-Hz standard tones, 100 ms in duration at 63 dB SPL intensity.

  • Task cue stimuli: interspersed in the stimulus sequence at mean 5-sec intervals, consisting of the simultaneous spoken and printed display of one of the words Look or Hear, each presented for 200 ms.
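The interleaved dual-oddball stream can be sketched in plain Python. The stream names, 10% target probability, and 200–800 ms SOA range come from the description above; the specific interleaving scheme is an illustrative assumption, not the published stimulus code:

```python
import random

random.seed(0)

def simulate_block(n_stimuli=264, p_target=0.10, soa=(0.2, 0.8)):
    """Generate one block of randomly interleaved auditory/visual
    oddball events with uniformly varying SOAs (in seconds)."""
    events, t = [], 0.0
    for _ in range(n_stimuli):
        stream = random.choice(["auditory", "visual"])
        kind = "target" if random.random() < p_target else "standard"
        events.append((t, stream, kind))
        t += random.uniform(*soa)  # SOA varies randomly, 200-800 ms
    return events

block = simulate_block()
soas = [b[0] - a[0] for a, b in zip(block, block[1:])]
print(len(block), round(min(soas), 2), round(max(soas), 2))
```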

Additional data acquired: Participants had no history of major neurological, psychiatric, or medical disorders. All had normal or corrected-to-normal vision and hearing (none wore hearing aids). Verbal and performance IQ were assessed using the WASI-III (Wechsler, 1997). There were no significant differences between the groups in IQ measures or years of education. Participants in the Older group received a battery of neuropsychological tests to confirm normal cognitive functioning, including the Mini Mental State Exam (MMSE) (Folstein et al., 1975), the Dementia Rating Scale (DRS) (Mattis, 1988), and the Wechsler Memory Scale.

Experiment location: Department of Psychiatry laboratory of Jeanne Townsend, University of California San Diego, La Jolla CA (USA).

Note 1: ERP measure results for the FA and FV conditions only were presented in Ceponiene, R., Westerfield, M., Torki, M. and Townsend, J., 2008. Modality-specificity of sensory aging in vision and audition: evidence from event-related potentials. Brain Research, 1215, pp. 53–68. Some unpublished results by Christian Kothe and Scott Makeig on the SH condition may be available from the authors <christiankothe@gmail.com> <smakeig@ucsd.edu>.

Note 2: The code subdirectory contains several auxiliary files that were produced during the curation process. The curation was done using a series of Jupyter notebooks, available as run in the code/curation_notebooks subdirectory.

While these curation notebooks were run, status information was logged using the HEDLogger. The output of the logging process is in code/curation_logs.

Updated versions of the curation notebooks can be found at: hed-standard/hed-examples.

Dataset Information#

Dataset ID

DS002893

Title

Auditory-Visual Shift Study

Year

2020

Authors

Marissa Westerfield (data, curation), Scott Makeig (data, curation), Dung Truong (curation), Kay Robbins (curation), Arno Delorme (curation)

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds002893.v2.0.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds002893,
  title = {Auditory-Visual Shift Study},
  author = {Marissa Westerfield (data, curation) and Scott Makeig (data, curation) and Dung Truong (curation) and Kay Robbins (curation) and Arno Delorme (curation)},
  doi = {10.18112/openneuro.ds002893.v2.0.0},
  url = {https://doi.org/10.18112/openneuro.ds002893.v2.0.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 49

  • Recordings: 370

  • Tasks: 1

Channels & sampling rate
  • Channels: 36 (52), 33 (52)

  • Sampling rate (Hz): 250.0 (84), 250.0293378038558 (20)

  • Duration (hours): 0.0

Tags
  • Pathology: Not specified

  • Modality: —

  • Type: —

Files & format
  • Size on disk: 7.7 GB

  • File count: 370

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds002893.v2.0.0

Provenance

API Reference#

Use the DS002893 class to access this dataset programmatically.

class eegdash.dataset.DS002893(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds002893. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 49; recordings: 52; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
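For example, the `$in` filter used in the Quickstart selects records whose field value is in a given list. The toy matcher below only illustrates those semantics; it is not EEGDash's actual implementation, which evaluates queries against the metadata database:

```python
def matches(record, query):
    """Toy matcher for flat, $in-style filters like those in the Quickstart.
    Illustrative only -- EEGDash evaluates queries server-side."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain equality form, e.g. {"subject": "01"}
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01', '02']
```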

References

OpenNeuro dataset: https://openneuro.org/datasets/ds002893

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds002893

Examples

>>> from eegdash.dataset import DS002893
>>> dataset = DS002893(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#