eegdash.dataset.DS004011#

participants.tsv (OpenNeuro ds004011). Access recordings and metadata through EEGDash.

Modality: meg | Tasks: 1 | License: CC0 | Subjects: 22 | Recordings: 132 | Source: openneuro

Dataset Information#

Dataset ID: DS004011
Title: participants.tsv
Year: 2022
Authors: Lina Teichmann, Denise Moerel, Anina Rich, Chris Baker
License: CC0
Citation / DOI: doi:10.18112/openneuro.ds004011.v1.0.3
Source links: OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds004011,
  title = {participants.tsv},
  author = {Lina Teichmann and Denise Moerel and Anina Rich and Chris Baker},
  year = {2022},
  doi = {10.18112/openneuro.ds004011.v1.0.3},
  url = {https://doi.org/10.18112/openneuro.ds004011.v1.0.3},
}

Highlights#

Subjects & recordings
  • Subjects: 22

  • Recordings: 132

  • Tasks: 1

Channels & sampling rate
  • Channels: 271

  • Sampling rate (Hz): 1200.0

  • Duration (hours): 0

Tasks & conditions
  • Tasks: 1

  • Experiment type: Unknown

  • Subject type: Unknown

Files & format
  • Size on disk: Unknown

  • File count: Unknown

  • Format: Unknown

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds004011.v1.0.3


Quickstart#

Install

pip install eegdash

Load a recording

from eegdash.dataset import DS004011

dataset = DS004011(cache_dir="./data")
recording = dataset[0]
raw = recording.load()
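
Inspect the loaded recording. A minimal sketch, assuming recording.load() returns an mne.io.Raw object (the channel count and sampling rate are read from the loaded file, not hard-coded):

print(raw.info["sfreq"])     # sampling rate in Hz (the metadata above reports 1200.0)
print(len(raw.ch_names))     # number of channels (the metadata above reports 271)
data, times = raw[:, :1000]  # first 1000 samples from every channel as a NumPy array
print(data.shape)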

Filter/query

dataset = DS004011(cache_dir="./data", subject="01")
dataset = DS004011(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
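
To check what a query matched before loading any signals, print the per-recording metadata table. A minimal sketch, assuming dataset.description behaves like a pandas DataFrame (the usual braindecode convention for EEGDashDataset subclasses):

print(len(dataset))                # number of matched recordings
print(dataset.description.head())  # per-recording metadata (subject, task, ...)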

Quality & caveats#

  • No dataset-specific caveats are listed in the available metadata.

API#

class eegdash.dataset.DS004011(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds004011. Modality: meg; Experiment type: Unknown; Subject type: Unknown. Subjects: 22; recordings: 132; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type: Path

query#

Merged query with the dataset filter applied.

Type: dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type: list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
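
As a concrete illustration, several filters can be combined in one query; they are AND-ed with the built-in dataset filter. A hedged sketch: "subject" appears in the Quickstart above, while "task" is assumed here to be among ALLOWED_QUERY_FIELDS.

from eegdash.dataset import DS004011

# Both conditions below are AND-ed with the ds004011 dataset filter.
# The "task" field name is an assumption; check ALLOWED_QUERY_FIELDS for the exact set.
dataset = DS004011(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}, "task": {"$exists": True}},
)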

References

OpenNeuro dataset: https://openneuro.org/datasets/ds004011
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds004011
DOI: https://doi.org/10.18112/openneuro.ds004011.v1.0.3

Examples

>>> from eegdash.dataset import DS004011
>>> dataset = DS004011(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()

__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type: None
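
Usage sketch (the on-disk format produced by save is not described in this summary; the path below is illustrative):

# Persist the prepared dataset so a later session can reload it without re-querying
dataset.save("./ds004011_saved", overwrite=True)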

See Also#