eegdash.dataset.DS006317#

participants.tsv (OpenNeuro ds006317). Access recordings and metadata through EEGDash.

Modality: eeg | Tasks: 2 | License: CC0 | Subjects: 2 | Recordings: 64 | Source: openneuro

Dataset Information#

Dataset ID

DS006317

Title

participants.tsv

Year

Unknown

Authors

Zihan Zhang, Yu Bao, Tianyi Jiang, Xiao Ding, Xia Liang, Juntong Du, Yi Zhao, Kai Xiong, Bing Qin, Ting Liu

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds006317.v1.1.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds006317,
  title = {participants.tsv},
  author = {Zihan Zhang and Yu Bao and Tianyi Jiang and Xiao Ding and Xia Liang and Juntong Du and Yi Zhao and Kai Xiong and Bing Qin and Ting Liu},
  doi = {10.18112/openneuro.ds006317.v1.1.0},
  url = {https://doi.org/10.18112/openneuro.ds006317.v1.1.0},
}

Highlights#

Subjects & recordings
  • Subjects: 2

  • Recordings: 64

  • Tasks: 2

Channels & sampling rate
  • Channels: 124

  • Sampling rate (Hz): 1000.0

  • Duration (hours): 0

Tasks & conditions
  • Tasks: 2

  • Experiment type: Unknown

  • Subject type: Unknown

Files & format
  • Size on disk: Unknown

  • File count: Unknown

  • Format: Unknown

License & citation
  • License: CC0

  • DOI: 10.18112/openneuro.ds006317.v1.1.0


Quickstart#

Install

pip install eegdash

Load a recording

from eegdash.dataset import DS006317

dataset = DS006317(cache_dir="./data")  # metadata and files are cached under ./data
recording = dataset[0]                  # first recording in the dataset
raw = recording.load()                  # load the EEG signal
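
The loaded object is expected to be an MNE Raw instance (an assumption based on typical EEGDash usage), so standard MNE inspection works on it:

print(raw.info)          # sampling rate, channel count, montage, etc.
print(raw.ch_names[:5])  # first few of the channel names
data = raw.get_data()    # NumPy array of shape (n_channels, n_samples)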

Filter/query

# Restrict the dataset to a single subject
dataset = DS006317(cache_dir="./data", subject="01")

# Or pass a MongoDB-style query for more complex selections
dataset = DS006317(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
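
Other metadata fields can be filtered the same way; the task name below is a hypothetical placeholder, since the available task labels are not listed in this summary:

# Combine a subject filter with a task filter (task name is hypothetical)
dataset = DS006317(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}, "task": "rest"},
)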

Quality & caveats#

  • No dataset-specific caveats are listed in the available metadata.

API#

class eegdash.dataset.DS006317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds006317. Modality: eeg; Experiment type: Unknown; Subject type: Unknown. Subjects: 2; recordings: 64; tasks: 2.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None
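
A minimal sketch of reading these attributes after construction (the printed values depend on your cache directory and on whether metadata records were pre-fetched):

from eegdash.dataset import DS006317

dataset = DS006317(cache_dir="./data")
print(dataset.data_dir)  # e.g. ./data/ds006317
print(dataset.query)     # merged query including the dataset filter
print(dataset.records)   # None unless metadata records were pre-fetched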

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
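
For example, the recording-level metadata can be inspected through dataset.description; assuming it behaves like a pandas DataFrame (as in braindecode-style datasets), this sketch lists the available fields:

dataset = DS006317(cache_dir="./data")
print(dataset.description.columns)  # available recording-level metadata fields
print(dataset.description.head())   # one row per recording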

References

OpenNeuro dataset: https://openneuro.org/datasets/ds006317
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds006317
DOI: https://doi.org/10.18112/openneuro.ds006317.v1.1.0

Examples

>>> from eegdash.dataset import DS006317
>>> dataset = DS006317(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None
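
A usage sketch for save, assuming the destination directory exists and is writable:

dataset = DS006317(cache_dir="./data")
dataset.save("./exports/ds006317_dataset", overwrite=True)  # replace any existing file at that path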

See Also#