eegdash.dataset.DS006253#

MetaRDK (OpenNeuro ds006253). Access recordings and metadata through EEGDash.

Modality: ieeg Tasks: 4 License: CC0 Subjects: 23 Recordings: 201 Source: openneuro

Dataset Information#

Dataset ID

DS006253

Title

MetaRDK

Year

Unknown

Authors

Dorian Goueytes, Francois Stockart, Alexis Robin, Lucien Gyger, Martin Rouy, Dominique Hoffmann, Lorella Minotti, Philippe Kahane, Michael Pereira, Nathan Faivre

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds006253.v1.0.3

Source links

OpenNeuro: https://openneuro.org/datasets/ds006253 | NeMAR: https://nemar.org/dataexplorer/detail?dataset_id=ds006253

Copy-paste BibTeX
@dataset{ds006253,
  title = {MetaRDK},
  author = {Dorian Goueytes and Francois Stockart and Alexis Robin and Lucien Gyger and Martin Rouy and Dominique Hoffmann and Lorella Minotti and Philippe Kahane and Michael Pereira and Nathan Faivre},
  doi = {10.18112/openneuro.ds006253.v1.0.3},
  url = {https://doi.org/10.18112/openneuro.ds006253.v1.0.3},
}

Highlights#

Subjects & recordings
  • Subjects: 23

  • Recordings: 201

  • Tasks: 4

Channels & sampling rate
  • Channels: Unknown

  • Sampling rate (Hz): Unknown

  • Duration (hours): Unknown

Tasks & conditions
  • Tasks: 4

  • Experiment type: Unknown

  • Subject type: Unknown

Files & format
  • Size on disk: Unknown

  • File count: Unknown

  • Format: Unknown

License & citation
  • License: CC0

  • DOI: 10.18112/openneuro.ds006253.v1.0.3

Quickstart#

Install

pip install eegdash

Load a recording

from eegdash.dataset import DS006253

# Cache the dataset metadata locally, then load the first recording.
dataset = DS006253(cache_dir="./data")
recording = dataset[0]
raw = recording.load()

Filter/query

# Keyword filters select recordings by BIDS entities (here, subject "01").
dataset = DS006253(cache_dir="./data", subject="01")

# MongoDB-style queries allow richer selections, e.g. several subjects at once.
dataset = DS006253(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
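
Putting these together, the sketch below iterates over a filtered dataset and inspects the loaded signals. It assumes, as stated in the Notes further down, that each item is a recording whose load() returns an MNE Raw object and that recording-level metadata are exposed via dataset.description; the exact contents of description depend on the installed eegdash version.

from eegdash.dataset import DS006253

# Restrict to two subjects with a MongoDB-style query (see above).
dataset = DS006253(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

# Assumption: dataset.description holds one row of metadata per recording.
print(dataset.description)

for recording in dataset:
    raw = recording.load()  # assumption: returns an MNE Raw object
    print(raw.info["sfreq"], raw.info["nchan"])
    break  # inspect only the first matching recording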

Quality & caveats#

  • No dataset-specific caveats are listed in the available metadata.

API#

class eegdash.dataset.DS006253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds006253. Modality: ieeg; Experiment type: Unknown; Subject type: Decision-Making, Metacognition. Subjects: 23; recordings: 201; tasks: 4.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None
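
A short sketch of accessing the attributes documented above; the printed values depend on the installed eegdash version and on whether metadata records were pre-fetched.

from eegdash.dataset import DS006253

dataset = DS006253(cache_dir="./data")

print(dataset.data_dir)  # local cache directory, i.e. cache_dir / dataset_id
print(dataset.query)     # merged query including the dataset filter
print(dataset.records)   # pre-fetched metadata records, or None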

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References

OpenNeuro dataset: https://openneuro.org/datasets/ds006253
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds006253
DOI: https://doi.org/10.18112/openneuro.ds006253.v1.0.3

Examples

>>> from eegdash.dataset import DS006253
>>> dataset = DS006253(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None
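
A minimal usage sketch for save(), assuming the semantics follow the signature above: the dataset is written to the given destination path, and overwrite=True replaces an existing file. The destination path shown here is illustrative.

from eegdash.dataset import DS006253

dataset = DS006253(cache_dir="./data")

# Assumption: save() writes the dataset to the destination path;
# overwrite=True replaces an existing file at that location.
dataset.save("./exports/ds006253", overwrite=True)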

See Also#