DS004993#

WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS

Access recordings and metadata through EEGDash.

Citation: Liberty S. Hamilton, Maansi Desai, Alyssa Field (2024). WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS. doi:10.18112/openneuro.ds004993.v1.1.2

Modality: ieeg Subjects: 3 Recordings: 30 License: CC0 Source: openneuro Citations: 0

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS004993

dataset = DS004993(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS004993(cache_dir="./data", subject="01")

Advanced query

dataset = DS004993(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds004993,
  title = {WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS},
  author = {Liberty S. Hamilton and Maansi Desai and Alyssa Field},
  doi = {10.18112/openneuro.ds004993.v1.1.2},
  url = {https://doi.org/10.18112/openneuro.ds004993.v1.1.2},
}

About This Dataset#

WIRED ICM TUTORIAL DATA

Contributors: Liberty S. Hamilton, PhD, Maansi Desai, PhD, Alyssa Field, MEd

Email: liberty.hamilton@austin.utexas.edu

This is a sample BIDS dataset for the WIRED ICM course in Paris, France in March 2024.


This contains intracranial recordings collected by the Hamilton Lab at the University of Texas at Austin. These recordings include examples of evoked data during natural listening tasks along with some examples of seizure-related activity and vagus nerve stimulator (VNS) artifact for illustrative purposes. All procedures were approved by the University of Texas at Austin Institutional Review Board.

Funding: Support was provided by the National Institutes of Health National Institute on Deafness and Other Communication Disorders (R01 DC018579, to LSH).

Tasks:

  1. movietrailers - In this task, patients listen to clips from various Pixar, Disney, DreamWorks, and other movies. We have previously published EEG work using these stimuli (Desai et al., 2021).

  2. timit4 and timit5 - In these tasks, patients listen to subsets of the TIMIT acoustic-phonetic corpus (Garofolo et al., 1993). The events provided in the dataset mark the onset and offset of each sentence. In timit4, each sentence is unique; in timit5, 10 sentences are each repeated 10 times. This is the same stimulus set used in Mesgarani et al. (2014), Hamilton et al. (2018), Hamilton et al. (2021), and Desai et al. (2021).
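Because the events mark only sentence onsets and offsets, per-sentence analysis typically reduces to slicing the recording between those boundaries. A minimal, self-contained sketch of that step in plain NumPy (the onset/offset values and channel count here are invented for illustration; in practice they come from the BIDS events files and the actual recording):

```python
import numpy as np

def slice_sentences(data, sfreq, onsets_s, offsets_s):
    """Cut a (channels, samples) array into per-sentence segments.

    onsets_s / offsets_s are sentence boundaries in seconds, as given
    by the dataset's sentence onset/offset events.
    """
    segments = []
    for on, off in zip(onsets_s, offsets_s):
        start = int(round(on * sfreq))
        stop = int(round(off * sfreq))
        segments.append(data[:, start:stop])
    return segments

# Illustrative values only -- real boundaries come from the events files.
sfreq = 512.0
data = np.random.randn(4, int(10 * sfreq))  # 4 channels, 10 s of data
segments = slice_sentences(data, sfreq, onsets_s=[0.5, 4.0], offsets_s=[3.5, 7.2])
print([seg.shape for seg in segments])
```

Each segment can then be averaged or modeled separately; for timit5, the 10 repetitions of each sentence lend themselves to trial averaging.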

Notes:

  • The movie trailer data for subject W1 was acquired at the start of a generalized tonic clonic seizure, and the research session was terminated. Large, synchronized spikes can be observed on multiple channels on the right parietal grid throughout the iEEG data.

  • The TIMIT data for subject W2 is an example of fairly clean sentence evoked data.

  • The TIMIT data for subject W3 is a good example of on-and-off VNS artifact. The VNS has a strong artifact at ~20 Hz. Some patients with epilepsy may have these implanted devices to help control their seizures, so you should know how to spot artifact-related activity. Despite these artifacts, the evoked responses to sentences are quite strong.

  • The acquisition number (B3, B8, etc.) reflects the order in which this task was run relative to other tasks in an iEEG session and can be ignored here.
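The ~20 Hz VNS artifact in subject W3 is easiest to spot in the frequency domain as a sharp spectral peak. A self-contained sketch of the idea using a simulated signal in plain NumPy (in practice you would run the same check on data pulled from the raw object; the simulation parameters are made up for illustration):

```python
import numpy as np

def dominant_frequency(signal, sfreq):
    """Return the frequency (Hz) with the largest spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sfreq)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Simulate 10 s of noise with a strong 20 Hz artifact riding on top.
sfreq = 512.0
t = np.arange(int(10 * sfreq)) / sfreq
rng = np.random.default_rng(0)
signal = 0.5 * rng.standard_normal(t.size) + 3.0 * np.sin(2 * np.pi * 20.0 * t)

peak = dominant_frequency(signal, sfreq)
print(f"dominant frequency: {peak:.1f} Hz")  # a ~20 Hz peak flags the artifact
```

A channel whose dominant frequency sits near 20 Hz during stimulation epochs is a candidate for notch filtering or exclusion before computing evoked responses.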

References

  • Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896

  • Desai, M., Holder, J., Villarreal, C., Clark, N., Hoang, B., & Hamilton, L. S. (2021). Generalizable EEG encoding models with naturalistic audiovisual stimuli. Journal of Neuroscience, 41(43), 8946-8962.

  • Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., & Pallett, D. S. (1993). DARPA TIMIT acoustic-phonetic continous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report n, 93, 27403.

  • Hamilton, L. S., Edwards, E., & Chang, E. F. (2018). A spatial map of onset and sustained responses to speech in the human superior temporal gyrus. Current Biology, 28(12), 1860-1871.

  • Hamilton, L. S., Oganian, Y., Hall, J., & Chang, E. F. (2021). Parallel and distributed encoding of speech across human auditory cortex. Cell, 184(18), 4626-4639.

  • Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. https://doi.org/10.1038/s41597-019-0105-7

  • Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). Phonetic feature encoding in human superior temporal gyrus. Science, 343(6174), 1006-1010.

Dataset Information#

Dataset ID

DS004993

Title

WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS

Year

2024

Authors

Liberty S. Hamilton, Maansi Desai, Alyssa Field

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds004993.v1.1.2

Source links

OpenNeuro | NeMAR | Source URL


Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 3

  • Recordings: 30

  • Tasks: 3

Channels & sampling rate
  • Channels: 148 (2 recordings), 160 (2), 106 (2)

  • Sampling rate (Hz): 512.0 (4 recordings), 2048.0 (2)

  • Duration (hours): 0.0

Tags
  • Pathology: Epilepsy

  • Modality: Auditory

  • Type: Perception

Files & format
  • Size on disk: 305.1 MB

  • File count: 30

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds004993.v1.1.2


API Reference#

Use the DS004993 class to access this dataset programmatically.

class eegdash.dataset.DS004993(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds004993. Modality: ieeg; Experiment type: Perception; Subject type: Epilepsy. Subjects: 3; recordings: 30; tasks: 3.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
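The MongoDB-style filter semantics are easy to illustrate with plain Python. A hypothetical, self-contained sketch of how an `$in` condition selects records (the record dicts and the helper `matches_in_filter` are invented here for illustration, not part of the eegdash API):

```python
def matches_in_filter(record, query):
    """Check a record dict against {field: {"$in": [...]}} style filters."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

# Three toy metadata records, one per subject.
records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches_in_filter(r, query)]
print(selected)  # subject "03" is filtered out
```

The real query is applied server-side and is additionally ANDed with the dataset filter, which is why `query` must not itself contain the key `dataset`.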

References

  • OpenNeuro dataset: https://openneuro.org/datasets/ds004993

  • NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds004993

Examples

>>> from eegdash.dataset import DS004993
>>> dataset = DS004993(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()

__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#